Dataset columns: corpus_id (string, 7–12 chars), paper_id (string, 9–16 chars), title (string, 1–261 chars), abstract (string, 70–4.02k chars), source (1 class), bibtex (string, 208–20.9k chars), citation_key (string, 6–100 chars)
arxiv-664301
2410.01157
A Deep Learning Approach for Imbalanced Tabular Data in Advertiser Prospecting: A Case of Direct Mail Prospecting
<|reference_start|>A Deep Learning Approach for Imbalanced Tabular Data in Advertiser Prospecting: A Case of Direct Mail Prospecting: Acquiring new customers is a vital process for growing businesses. Prospecting is the process of identifying and marketing to potential customers using methods ranging from online digital advertising and linear television to out-of-home and direct mail. Despite the rapid growth in digital advertising (particularly social and search), research shows that direct mail remains one of the most effective ways to acquire new customers. However, there is a notable gap in the application of modern machine learning techniques within the direct mail space, which could significantly enhance targeting and personalization strategies. Methodologies deployed through direct mail are the focus of this paper. We propose a supervised learning approach for identifying new customers, i.e., prospecting, which comprises how we define labels for our data and rank potential customers. Casting prospecting as a supervised learning problem leads to imbalanced tabular data. The current state-of-the-art approach for tabular data is an ensemble of tree-based methods like random forest and XGBoost. We propose a deep learning framework for imbalanced tabular data, designed to tackle large imbalanced datasets with a vast number of numerical and categorical features. Our framework comprises two components: an autoencoder and a feed-forward neural network. We demonstrate the effectiveness of our framework through a transparent real-world case study of prospecting in direct mail advertising. Our results show that our proposed deep learning framework outperforms the state-of-the-art tree-based random forest approach when applied in the real world.<|reference_end|>
arxiv
@article{farhang2024a, title={A Deep Learning Approach for Imbalanced Tabular Data in Advertiser Prospecting: A Case of Direct Mail Prospecting}, author={Sadegh Farhang and William Hayes and Nick Murphy and Jonathan Neddenriep and Nicholas Tyris}, journal={arXiv preprint arXiv:2410.01157}, year={2024}, archivePrefix={arXiv}, eprint={2410.01157}, primaryClass={cs.LG} }
farhang2024a
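The abstract above describes a two-component framework, an autoencoder plus a feed-forward classifier, for imbalanced tabular data. A minimal sketch of that idea on synthetic data; the architecture sizes, the scikit-learn stand-ins, and the class-weighting choice are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic imbalanced tabular data: 500 non-responders, 25 responders (20:1).
X = np.vstack([rng.normal(0.0, 1.0, (500, 8)), rng.normal(1.5, 1.0, (25, 8))])
y = np.array([0] * 500 + [1] * 25)

# Component 1: an autoencoder -- here a one-hidden-layer MLP trained to
# reconstruct its own input; the hidden activations are a learned code.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(X, X)
Z = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])  # ReLU hidden codes

# Component 2: a feed-forward classifier on raw features + learned codes;
# class weighting is one standard way to cope with the imbalance.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(np.hstack([X, Z]), y)
scores = clf.predict_proba(np.hstack([X, Z]))[:, 1]  # used to rank prospects
```

The final scores give a ranking of potential customers, mirroring the "rank potential customers" framing in the abstract.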
arxiv-664302
2410.01158
Modeling the Energy Consumption of the HEVC Software Encoding Process using Processor events
<|reference_start|>Modeling the Energy Consumption of the HEVC Software Encoding Process using Processor events: Developing energy-efficient video encoding algorithms is highly important due to the high processing complexities and, consequently, the high energy demand of the encoding process. To accomplish this, the energy consumption of video encoders must be studied, which is only possible with a complex and dedicated energy measurement setup. This emphasizes the need for simple energy estimation models, which estimate the energy required for the encoding. Our paper investigates the possibility of estimating the energy demand of an HEVC software CPU-encoding process using processor events. First, we perform energy measurements and obtain processor events using dedicated profiling software. Then, by using the measured energy demand of the encoding process and profiling data, we build an encoding energy estimation model that uses the processor events of the ultrafast encoding preset to obtain the energy estimate for complex encoding presets with a mean absolute percentage error of 5.36% when averaged over all the presets. Additionally, we present an energy model that offers the possibility of obtaining energy distribution among various encoding sub-processes.<|reference_end|>
arxiv
@article{ramasubbu2024modeling, title={Modeling the Energy Consumption of the HEVC Software Encoding Process using Processor events}, author={Geetha Ramasubbu and Andr{\'e} Kaup and Christian Herglotz}, journal={arXiv preprint arXiv:2410.01158}, year={2024}, archivePrefix={arXiv}, eprint={2410.01158}, primaryClass={eess.IV cs.AR} }
ramasubbu2024modeling
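The estimation idea in the abstract, fitting encoding energy as a function of processor-event counts, can be sketched with ordinary least squares on synthetic profiling data. The event set, the magnitudes, and the purely linear form are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic profiling data: one row per encoding run, one column per
# processor event (e.g. retired instructions, cache misses, ...).
events = rng.uniform(1e6, 1e9, size=(40, 4))
w_true = np.array([2e-9, 5e-8, 1e-7, 3e-9])           # Joules/event (made up)
measured = events @ w_true + rng.normal(0, 0.05, 40)  # "measured" energy in J

# Fit a per-event energy cost by least squares, then estimate unseen runs.
w_fit, *_ = np.linalg.lstsq(events, measured, rcond=None)
estimate = events @ w_fit
mape = 100 * np.mean(np.abs(estimate - measured) / measured)
```

In the paper's setting, the event counts would come from a cheap ultrafast-preset run while the fitted model predicts the energy of more complex presets; here both sides are synthetic.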
arxiv-664303
2410.01160
GraphRevisedIE: Multimodal Information Extraction with Graph-Revised Network
<|reference_start|>GraphRevisedIE: Multimodal Information Extraction with Graph-Revised Network: Key information extraction (KIE) from visually rich documents (VRD) has been a challenging task in document intelligence because of not only the complicated and diverse layouts of VRD that make the model hard to generalize but also the lack of methods to exploit the multimodal features in VRD. In this paper, we propose a light-weight model named GraphRevisedIE that effectively embeds multimodal features such as textual, visual, and layout features from VRD and leverages graph revision and graph convolution to enrich the multimodal embedding with global context. Extensive experiments on multiple real-world datasets show that GraphRevisedIE generalizes to documents of varied layouts and achieves comparable or better performance compared to previous KIE methods. We also publish a business license dataset that contains both real-life and synthesized documents to facilitate research of document KIE.<|reference_end|>
arxiv
@article{cao2024graphrevisedie, title={GraphRevisedIE: Multimodal Information Extraction with Graph-Revised Network}, author={Panfeng Cao and Jian Wu}, journal={Pattern Recognition}, volume={140}, pages={109542}, year={2023}, doi={10.1016/j.patcog.2023.109542}, archivePrefix={arXiv}, eprint={2410.01160}, primaryClass={cs.IR cs.CV} }
cao2024graphrevisedie
arxiv-664304
2410.01162
Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech
<|reference_start|>Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech: As speech becomes an increasingly common modality for interacting with large language models (LLMs), it is becoming desirable to develop systems where LLMs can take into account users' emotions or speaking styles when providing their responses. In this work, we study the potential of an LLM to understand these aspects of speech without fine-tuning its weights. To do this, we utilize an end-to-end system with a speech encoder; the encoder is trained to produce token embeddings such that the LLM's response to an expressive speech prompt is aligned with its response to a semantically matching text prompt where the speaker's emotion has also been specified. We find that this training framework allows the encoder to generate tokens that capture both semantic and paralinguistic information in speech and effectively convey it to the LLM, even when the LLM remains completely frozen. We also explore training on additional emotion and style-related response alignment tasks, finding that they further increase the amount of paralinguistic information explicitly captured in the speech tokens. Experiments demonstrate that our system is able to produce higher quality and more empathetic responses to expressive speech prompts compared to several baselines.<|reference_end|>
arxiv
@article{kang2024frozen, title={Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech}, author={Wonjune Kang and Junteng Jia and Chunyang Wu and Wei Zhou and Egor Lakomkin and Yashesh Gaur and Leda Sari and Suyoun Kim and Ke Li and Jay Mahadeokar and Ozlem Kalinli}, journal={arXiv preprint arXiv:2410.01162}, year={2024}, archivePrefix={arXiv}, eprint={2410.01162}, primaryClass={eess.AS cs.CL cs.SD} }
kang2024frozen
arxiv-664305
2410.01166
Document Type Classification using File Names
<|reference_start|>Document Type Classification using File Names: Rapid document classification is critical in several time-sensitive applications like digital forensics and large-scale media classification. Traditional approaches that rely on heavy-duty deep learning models fall short due to high inference times over vast input datasets and the computational resources associated with analyzing whole documents. In this paper, we present a method using lightweight supervised learning models, combined with a TF-IDF feature-extraction-based tokenization method, to accurately and efficiently classify documents based solely on file names, substantially reducing inference time. This approach can distinguish ambiguous file names from indicative file names through confidence scores and through the use of a negative class representing ambiguous file names. Our results indicate that file name classifiers can process more than 80% of the in-scope data with 96.7% accuracy when tested on a dataset with a large portion of out-of-scope data with respect to the training dataset, while being 442.43x faster than more complex models such as DiT. Our method offers a crucial solution for efficiently processing vast datasets in critical scenarios, enabling fast, more reliable document classification.<|reference_end|>
arxiv
@article{li2024document, title={Document Type Classification using File Names}, author={Zhijian Li and Stefan Larson and Kevin Leach}, journal={arXiv preprint arXiv:2410.01166}, year={2024}, archivePrefix={arXiv}, eprint={2410.01166}, primaryClass={cs.CL} }
li2024document
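A minimal sketch of file-name-only classification as described above: TF-IDF over character n-grams of the name, a lightweight classifier, and a confidence threshold that routes uncertain names to an "ambiguous" bucket. The tiny corpus, the 0.6 threshold, and the model choice are illustrative, not the paper's setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_names = [
    "invoice_2023_q1.pdf", "invoice_acme_corp.pdf", "march_invoice.pdf",
    "receipt_store_01.png", "lunch_receipt.jpg", "receipt_2022.pdf",
    "scan0001.pdf", "document.pdf", "untitled.pdf",   # negative class
]
train_labels = ["invoice"] * 3 + ["receipt"] * 3 + ["other"] * 3

# Character n-grams tolerate separators, casing, and partial matches.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
X = vec.fit_transform(train_names)
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

def classify(name, threshold=0.6):
    """Return the predicted type, or 'ambiguous' when confidence is low."""
    proba = clf.predict_proba(vec.transform([name]))[0]
    best = proba.argmax()
    return clf.classes_[best] if proba[best] >= threshold else "ambiguous"
```

A name with no n-gram overlap with the training set falls back to near-uniform class probabilities and is routed to "ambiguous", which is the mechanism the abstract describes for out-of-scope data.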
arxiv-664306
2410.01169
GADFA: Generator-Assisted Decision-Focused Approach for Opinion Expressing Timing Identification
<|reference_start|>GADFA: Generator-Assisted Decision-Focused Approach for Opinion Expressing Timing Identification: The advancement of text generation models has granted us the capability to produce coherent and convincing text on demand. Yet, in real-life circumstances, individuals do not continuously generate text or voice their opinions. For instance, consumers pen product reviews after weighing the merits and demerits of a product, and professional analysts issue reports following significant news releases. In essence, opinion expression is typically prompted by particular reasons or signals. Despite long-standing developments in opinion mining, the appropriate timing for expressing an opinion remains largely unexplored. To address this deficit, our study introduces an innovative task - the identification of news-triggered opinion expressing timing. We ground this task in the actions of professional stock analysts and develop a novel dataset for investigation. Our approach is decision-focused, leveraging text generation models to steer the classification model, thus enhancing overall performance. Our experimental findings demonstrate that the text generated by our model contributes fresh insights from various angles, effectively aiding in identifying the optimal timing for opinion expression.<|reference_end|>
arxiv
@article{chen2024gadfa, title={GADFA: Generator-Assisted Decision-Focused Approach for Opinion Expressing Timing Identification}, author={Chung-Chi Chen and Hiroya Takamura and Ichiro Kobayashi and Yusuke Miyao}, journal={arXiv preprint arXiv:2410.01169}, year={2024}, archivePrefix={arXiv}, eprint={2410.01169}, primaryClass={cs.CL} }
chen2024gadfa
arxiv-664307
2410.01170
Unifying the Scope of Bridging Anaphora Types in English: Bridging Annotations in ARRAU and GUM
<|reference_start|>Unifying the Scope of Bridging Anaphora Types in English: Bridging Annotations in ARRAU and GUM: Comparing bridging annotations across coreference resources is difficult, largely due to a lack of standardization across definitions and annotation schemas and narrow coverage of disparate text domains across resources. To alleviate domain coverage issues and consolidate schemas, we compare guidelines and use interpretable predictive models to examine the bridging instances annotated in the GUM, GENTLE and ARRAU corpora. Examining these cases, we find that there is a large difference in types of phenomena annotated as bridging. Beyond theoretical results, we release a harmonized, subcategorized version of the test sets of GUM, GENTLE and the ARRAU Wall Street Journal data to promote meaningful and reliable evaluation of bridging resolution across domains.<|reference_end|>
arxiv
@article{levine2024unifying, title={Unifying the Scope of Bridging Anaphora Types in English: Bridging Annotations in ARRAU and GUM}, author={Lauren Levine and Amir Zeldes}, journal={arXiv preprint arXiv:2410.01170}, year={2024}, archivePrefix={arXiv}, eprint={2410.01170}, primaryClass={cs.CL} }
levine2024unifying
arxiv-664308
2410.01171
BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation
<|reference_start|>BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation: Large language models excel at creative generation but continue to struggle with the issues of hallucination and bias. While retrieval-augmented generation (RAG) provides a framework for grounding LLMs' responses in accurate and up-to-date information, it still raises the question of bias: which sources should be selected for inclusion in the context? And how should their importance be weighted? In this paper, we study the challenge of cross-lingual RAG and present a dataset to investigate the robustness of existing systems at answering queries about geopolitical disputes, which exist at the intersection of linguistic, cultural, and political boundaries. Our dataset is sourced from Wikipedia pages containing information relevant to the given queries and we investigate the impact of including additional context, as well as the composition of this context in terms of language and source, on an LLM's response. Our results show that existing RAG systems continue to be challenged by cross-lingual use cases and suffer from a lack of consistency when they are provided with competing information in multiple languages. We present case studies to illustrate these issues and outline steps for future research to address these challenges. We make our dataset and code publicly available at https://github.com/manestay/bordIRlines.<|reference_end|>
arxiv
@article{li2024bordirlines, title={BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation}, author={Bryan Li and Samar Haider and Fiona Luo and Adwait Agashe and Chris Callison-Burch}, journal={arXiv preprint arXiv:2410.01171}, year={2024}, archivePrefix={arXiv}, eprint={2410.01171}, primaryClass={cs.CL} }
li2024bordirlines
arxiv-664309
2410.01173
Low depth amplitude estimation without really trying
<|reference_start|>Low depth amplitude estimation without really trying: Standard quantum amplitude estimation algorithms provide a quadratic speedup over Monte-Carlo simulations but require a circuit depth that scales as the inverse of the estimation error. In view of the shallow depths available on near-term devices, the precision achieved by these algorithms would be low. In this paper we bypass this limitation by performing the classical Monte-Carlo method on the quantum algorithm itself, achieving a higher-than-classical precision using low-depth circuits. We require the quantum algorithm to be weakly biased in order to avoid error accumulation during this process. Our method is parallel and can be as weakly biased as the constituent algorithm in some cases.<|reference_end|>
arxiv
@article{vu2024low, title={Low depth amplitude estimation without really trying}, author={Dinh-Long Vu and Bin Cheng and Patrick Rebentrost}, journal={arXiv preprint arXiv:2410.01173}, year={2024}, archivePrefix={arXiv}, eprint={2410.01173}, primaryClass={quant-ph cs.DS} }
vu2024low
arxiv-664310
2410.01174
Towards Inference-time Category-wise Safety Steering for Large Language Models
<|reference_start|>Towards Inference-time Category-wise Safety Steering for Large Language Models: While large language models (LLMs) have seen unprecedented advancements in capabilities and applications across a variety of use-cases, safety alignment of these models is still an area of active research. The fragile nature of LLMs, even models that have undergone extensive alignment and safety training regimes, warrants additional safety steering steps via training-free, inference-time methods. While recent work in the area of mechanistic interpretability has investigated how activations in latent representation spaces may encode concepts, and has thereafter performed representation engineering to induce such concepts in LLM outputs, the applicability of such techniques to safety is relatively under-explored. Unlike recent inference-time safety steering works, in this paper we explore safety steering of LLM outputs using: (i) category-specific steering vectors, thereby enabling fine-grained control over the steering, and (ii) sophisticated methods for extracting informative steering vectors for more effective safety steering while retaining quality of the generated text. We demonstrate our exploration on multiple LLMs and datasets, and showcase the effectiveness of the proposed steering method, along with a discussion on the implications and best practices.<|reference_end|>
arxiv
@article{bhattacharjee2024towards, title={Towards Inference-time Category-wise Safety Steering for Large Language Models}, author={Amrita Bhattacharjee and Shaona Ghosh and Traian Rebedea and Christopher Parisien}, journal={arXiv preprint arXiv:2410.01174}, year={2024}, archivePrefix={arXiv}, eprint={2410.01174}, primaryClass={cs.CL cs.AI} }
bhattacharjee2024towards
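Inference-time steering of the kind sketched in this abstract is commonly implemented by adding a direction vector to hidden activations. A toy numpy sketch of a category-specific steering vector built as a difference of means; the synthetic activations, dimensionality, and scaling factor are illustrative assumptions, not the paper's extraction method:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for hidden states collected at one layer for one safety
# category (e.g. "violence"): activations on unsafe vs. safe prompts.
unsafe_acts = rng.normal(+0.5, 1.0, size=(50, 128))
safe_acts = rng.normal(-0.5, 1.0, size=(50, 128))

# Category-specific steering vector: difference of the two means.
v = safe_acts.mean(axis=0) - unsafe_acts.mean(axis=0)
v /= np.linalg.norm(v)

def steer(hidden, vector, alpha=4.0):
    """Nudge hidden states along the steering direction at inference time."""
    return hidden + alpha * vector

steered = steer(unsafe_acts, v)
```

Per-category vectors like `v` are what enables the fine-grained control the abstract mentions: each category gets its own direction and its own strength `alpha`.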
arxiv-664311
2410.01176
Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks
<|reference_start|>Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks: Embodied AI is a rapidly advancing field that bridges the gap between cyberspace and physical space, enabling a wide range of applications. This evolution has led to the development of the Vehicular Embodied AI NETwork (VEANET), where advanced AI capabilities are integrated into vehicular systems to enhance autonomous operations and decision-making. Embodied agents, such as Autonomous Vehicles (AVs), are autonomous entities that can perceive their environment and take actions to achieve specific goals, actively interacting with the physical world. Embodied twins are digital models of these embodied agents, with various embodied AI twins serving intelligent applications in cyberspace. In VEANET, embodied AI twins act as in-vehicle AI assistants that perform diverse tasks supporting autonomous driving using generative AI models. Due to the limited computational resources of AVs, these AVs often offload computationally intensive tasks, such as constructing and updating embodied AI twins, to nearby Roadside Units (RSUs). However, because of the rapid mobility of AVs and the limited coverage of a single RSU, embodied AI twins require dynamic migration from the current RSU to other RSUs in real time, raising the challenge of selecting suitable RSUs for efficient embodied AI twin migration. Given information asymmetry, AVs cannot know the detailed information of RSUs. To this end, in this paper, we construct a multi-dimensional contract theoretical model between AVs and alternative RSUs. Considering that AVs may exhibit irrational behavior, we utilize prospect theory instead of expected utility theory to model the actual utilities of AVs. Finally, we employ a generative diffusion model-based algorithm to identify the optimal contract designs. Compared with traditional deep reinforcement learning algorithms, numerical results demonstrate the effectiveness of the proposed scheme.<|reference_end|>
arxiv
@article{zhong2024generative, title={Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks}, author={Yue Zhong and Jiawen Kang and Jinbo Wen and Dongdong Ye and Jiangtian Nie and Dusit Niyato and Xiaozheng Gao and Shengli Xie}, journal={arXiv preprint arXiv:2410.01176}, year={2024}, archivePrefix={arXiv}, eprint={2410.01176}, primaryClass={cs.AI} }
zhong2024generative
arxiv-664312
2410.01177
Adaptive Finite Element Method for Phase Field Fracture Models Based on Recovery Error Estimates
<|reference_start|>Adaptive Finite Element Method for Phase Field Fracture Models Based on Recovery Error Estimates: The phase field model is a widely used mathematical approach for describing crack propagation in continuum damage fracture. In the context of phase field fracture simulations, adaptive finite element methods (AFEM) are often employed to address the mesh-size dependency of the model. However, existing AFEM approaches for this application frequently rely on heuristic adjustments and empirical parameters for mesh refinement. In this paper, we introduce an adaptive finite element method based on recovery-type a posteriori error estimates, grounded in theoretical analysis. This method transforms the gradient of the numerical solution into a smoother function space, using the difference between the recovered gradient and the original numerical gradient as an error indicator for adaptive mesh refinement. This enables the automatic capture of crack propagation directions without the need for empirical parameters. We have implemented this adaptive method for the Hybrid formulation of the phase field model using the open-source software package FEALPy. The accuracy and efficiency of the proposed approach are demonstrated through simulations of classical 2D and 3D brittle fracture examples, validating the robustness and effectiveness of our implementation.<|reference_end|>
arxiv
@article{tian2024adaptive, title={Adaptive Finite Element Method for Phase Field Fracture Models Based on Recovery Error Estimates}, author={Tian Tian and Chen Chunyu and He Liang and Wei Huayi}, journal={arXiv preprint arXiv:2410.01177}, year={2024}, archivePrefix={arXiv}, eprint={2410.01177}, primaryClass={math.NA cs.NA math.AP} }
tian2024adaptive
arxiv-664313
2410.01180
UAL-Bench: The First Comprehensive Unusual Activity Localization Benchmark
<|reference_start|>UAL-Bench: The First Comprehensive Unusual Activity Localization Benchmark: Localizing unusual activities, such as human errors or surveillance incidents, in videos holds practical significance. However, current video understanding models struggle with localizing these unusual events, likely because of their insufficient representation in the models' pretraining datasets. To explore foundation models' capability in localizing unusual activity, we introduce UAL-Bench, a comprehensive benchmark for unusual activity localization, featuring three video datasets (UAG-OOPS, UAG-SSBD, UAG-FunQA) and an instruction-tuning dataset (OOPS-UAG-Instruct) to improve model capabilities. UAL-Bench evaluates three approaches: Video-Language Models (Vid-LLMs), instruction-tuned Vid-LLMs, and a novel integration of Vision-Language Models and Large Language Models (VLM-LLM). Our results show the VLM-LLM approach excels at localizing short-span unusual events and predicting their onset (start time) more accurately than Vid-LLMs. We also propose a new metric, R@1, TD <= p, to address limitations in existing evaluation methods. Our findings highlight the challenges posed by long-duration videos, particularly in autism diagnosis scenarios, and the need for further advancements in localization techniques. Our work not only provides a benchmark for unusual activity localization but also outlines the key challenges for existing foundation models, suggesting future research directions on this important task.<|reference_end|>
arxiv
@article{abdullah2024ual-bench, title={UAL-Bench: The First Comprehensive Unusual Activity Localization Benchmark}, author={Hasnat Md Abdullah and Tian Liu and Kangda Wei and Shu Kong and Ruihong Huang}, journal={arXiv preprint arXiv:2410.01180}, year={2024}, archivePrefix={arXiv}, eprint={2410.01180}, primaryClass={cs.CV cs.CL} }
abdullah2024ual-bench
arxiv-664314
2410.01183
FastLexRank: Efficient Lexical Ranking for Structuring Social Media Posts
<|reference_start|>FastLexRank: Efficient Lexical Ranking for Structuring Social Media Posts: We present FastLexRank (https://github.com/LiMaoUM/FastLexRank), an efficient and scalable implementation of the LexRank algorithm for text ranking. Designed to address the computational and memory complexity of the original LexRank method, FastLexRank significantly reduces time and memory requirements from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$ without compromising the quality or accuracy of the results. By employing an optimized approach to calculating the stationary distribution of sentence graphs, FastLexRank produces results identical to the original LexRank scores while enhancing computational efficiency. This paper details the algorithmic improvements that enable the processing of large datasets, such as social media corpora, in real time. Empirical results demonstrate its effectiveness, and we propose its use in identifying central tweets, which can be further analyzed using advanced NLP techniques. FastLexRank offers a scalable solution for text centrality calculation, addressing the growing need for efficient processing of digital content.<|reference_end|>
arxiv
@article{li2024fastlexrank, title={FastLexRank: Efficient Lexical Ranking for Structuring Social Media Posts}, author={Mao Li and Frederick Conrad and Johann Gagnon-Bartsch}, journal={arXiv preprint arXiv:2410.01183}, year={2024}, archivePrefix={arXiv}, eprint={2410.01183}, primaryClass={cs.CL stat.CO} }
li2024fastlexrank
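The O(n²) to O(n) reduction claimed in this abstract can be illustrated with a single linear-algebra identity: for a similarity graph S = E Eᵀ built from unit-norm sentence embeddings, the degree (centrality) vector S·1 equals E(Eᵀ·1), so the n×n similarity matrix never needs to be materialized, and for a symmetric chain the stationary distribution is proportional to these degrees. A sketch of just that trick; the released implementation may differ in details such as normalization:

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.normal(size=(1000, 16))
E /= np.linalg.norm(E, axis=1, keepdims=True)  # unit-norm sentence embeddings

# Naive LexRank-style centrality: row sums of the full similarity matrix.
S = E @ E.T                        # O(n^2) time and memory
naive = S @ np.ones(len(E))

# Associativity trick: S @ 1 == E @ (E.T @ 1), computed in O(n * d).
fast = E @ (E.T @ np.ones(len(E)))
```

Both vectors rank the same sentences as most central, but the second line of the fast path touches only the n×d embedding matrix.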
arxiv-664315
2410.01185
Formula-Driven Data Augmentation and Partial Retinal Layer Copying for Retinal Layer Segmentation
<|reference_start|>Formula-Driven Data Augmentation and Partial Retinal Layer Copying for Retinal Layer Segmentation: Major retinal layer segmentation methods for OCT images assume that the retina has been flattened in advance, and thus cannot always deal with retinas whose structure has changed due to ophthalmopathy or that are curved due to myopia. To eliminate the flattening step and make such methods practical, we propose novel data augmentation methods for OCT images. Formula-driven data augmentation (FDDA) emulates a variety of retinal structures by vertically shifting each column of the OCT images according to a given mathematical formula. We also propose partial retinal layer copying (PRLC), which copies a part of the retinal layers and pastes it into a region outside the retinal layers. Through experiments using the OCT MS and Healthy Control dataset and the Duke Cyst DME dataset, we demonstrate that FDDA and PRLC make it possible to detect the boundaries of retinal layers without flattening, even for retinal layer segmentation methods that assume a flattened retina.<|reference_end|>
arxiv
@article{konno2024formula-driven, title={Formula-Driven Data Augmentation and Partial Retinal Layer Copying for Retinal Layer Segmentation}, author={Tsubasa Konno and Takahiro Ninomiya and Kanta Miura and Koichi Ito and Noriko Himori and Parmanand Sharma and Toru Nakazawa and Takafumi Aoki}, journal={arXiv preprint arXiv:2410.01185}, year={2024}, archivePrefix={arXiv}, eprint={2410.01185}, primaryClass={eess.IV cs.CV} }
konno2024formula-driven
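FDDA as described, shifting each image column vertically by a formula-driven offset, can be sketched in a few lines of numpy. The parabolic offset emulating myopic curvature is an illustrative choice of formula, and a real implementation would handle the wrap-around that `np.roll` introduces (e.g. by padding):

```python
import numpy as np

def fdda_shift(img, offsets):
    """Shift each column of a B-scan vertically by its own offset (pixels)."""
    out = np.empty_like(img)
    for j, dy in enumerate(offsets):
        out[:, j] = np.roll(img[:, j], dy)  # note: np.roll wraps around
    return out

H, W = 64, 32
img = np.tile(np.arange(H, dtype=float)[:, None], (1, W))  # flat "layers"

# Formula-driven offsets: a parabola emulating curvature (illustrative).
x = np.linspace(-1.0, 1.0, W)
offsets = np.round(10 * x**2).astype(int)
aug = fdda_shift(img, offsets)
```

Applied to a flat synthetic retina, the augmentation bends the layers into a curved shape while leaving each column's layer ordering intact, which is exactly what a segmentation network must learn to tolerate without flattening.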
arxiv-664316
2410.01186
Efficient PAC Learning of Halfspaces with Constant Malicious Noise Rate
<|reference_start|>Efficient PAC Learning of Halfspaces with Constant Malicious Noise Rate: Understanding noise tolerance of learning algorithms under certain conditions is a central quest in learning theory. In this work, we study the problem of computationally efficient PAC learning of halfspaces in the presence of malicious noise, where an adversary can corrupt both instances and labels of training samples. The best-known noise tolerance either depends on a target error rate under distributional assumptions or on a margin parameter under large-margin conditions. In this work, we show that when both types of conditions are satisfied, it is possible to achieve {\em constant} noise tolerance by minimizing a reweighted hinge loss. Our key ingredients include: 1) an efficient algorithm that finds weights to control the gradient deterioration from corrupted samples, and 2) a new analysis on the robustness of the hinge loss equipped with such weights.<|reference_end|>
arxiv
@article{shen2024efficient, title={Efficient PAC Learning of Halfspaces with Constant Malicious Noise Rate}, author={Jie Shen and Xiaoyu Li}, journal={arXiv preprint arXiv:2410.01186}, year={2024}, archivePrefix={arXiv}, eprint={2410.01186}, primaryClass={cs.LG cs.DS stat.ML} }
shen2024efficient
arxiv-664317
2410.01188
Gold Panning in Vocabulary: An Adaptive Method for Vocabulary Expansion of Domain-Specific LLMs
<|reference_start|>Gold Panning in Vocabulary: An Adaptive Method for Vocabulary Expansion of Domain-Specific LLMs: While Large Language Models (LLMs) demonstrate impressive generation abilities, they frequently struggle when it comes to specialized domains due to their limited domain-specific knowledge. Studies on domain-specific LLMs resort to expanding the vocabulary before fine-tuning on a domain-specific corpus, aiming to decrease sequence length and enhance efficiency during decoding, without thoroughly investigating how vocabulary expansion affects LLMs across different domains. Our pilot study reveals that expansion with only a subset of the entire vocabulary may lead to superior performance. Guided by this discovery, this paper explores how to identify a vocabulary subset that achieves optimal results. We introduce VEGAD, an adaptive method that automatically identifies valuable words from a given domain vocabulary. Our method has been validated through experiments on three Chinese datasets, demonstrating its effectiveness. Additionally, we have undertaken comprehensive analyses of the method. Selecting an optimal subset for expansion has been shown to enhance performance on both domain-specific tasks and general tasks, showcasing the potential of VEGAD.<|reference_end|>
arxiv
@article{liu2024gold, title={Gold Panning in Vocabulary: An Adaptive Method for Vocabulary Expansion of Domain-Specific LLMs}, author={Chengyuan Liu and Shihang Wang and Lizhi Qing and Kun Kuang and Yangyang Kang and Changlong Sun and Fei Wu}, journal={arXiv preprint arXiv:2410.01188}, year={2024}, archivePrefix={arXiv}, eprint={2410.01188}, primaryClass={cs.CL} }
liu2024gold
arxiv-664318
2410.01189
[Re] Network Deconvolution
<|reference_start|>[Re] Network Deconvolution: Our work aims to reproduce the set of findings published in "Network Deconvolution" by Ye et al. (2020) [1]. That paper proposes an optimization technique for model training in convolutional neural networks. The proposed technique, "network deconvolution", is used in convolutional neural networks to remove pixel-wise and channel-wise correlations before data is fed into each layer. In particular, we interrogate the validity of the authors' claim that using network deconvolution instead of batch normalization improves deep learning model performance. Our effort confirms the validity of this claim, successfully reproducing the results reported in Tables 1 and 2 of the original paper. Our study involved 367 unique experiments across multiple architectures, datasets, and hyperparameter configurations. For Table 1, while there were some minor deviations in accuracy compared to the original values (within 10%), the overall trend was consistent with the original study's findings when training the models for 20 and 100 epochs. For Table 2, all 14 reproduced values were consistent with the original values. Additionally, we document the training and testing times for each architecture in Table 1 with 1-, 20-, and 100-epoch settings for both the CIFAR-10 and CIFAR-100 datasets. We document the total execution times for Table 2 architectures with the ImageNet dataset. The data and software used for this reproducibility study are publicly available at https://github.com/lamps-lab/rep-network-deconvolution.<|reference_end|>
arxiv
@article{obadage2024re, title={[Re] Network Deconvolution}, author={Rochana R. Obadage and Kumushini Thennakoon and Sarah M. Rajtmajer and Jian Wu}, journal={arXiv preprint arXiv:2410.01189}, year={2024}, archivePrefix={arXiv}, eprint={2410.01189}, primaryClass={cs.CV cs.DL cs.LG} }
obadage2024re
arxiv-664319
2410.01190
Integrating Visual and Textual Inputs for Searching Large-Scale Map Collections with CLIP
<|reference_start|>Integrating Visual and Textual Inputs for Searching Large-Scale Map Collections with CLIP: Despite the prevalence and historical importance of maps in digital collections, current methods of navigating and exploring map collections are largely restricted to catalog records and structured metadata. In this paper, we explore the potential for interactively searching large-scale map collections using natural language inputs ("maps with sea monsters"), visual inputs (i.e., reverse image search), and multimodal inputs (an example map + "more grayscale"). As a case study, we adopt 562,842 images of maps publicly accessible via the Library of Congress's API. To accomplish this, we use the multimodal Contrastive Language-Image Pre-training (CLIP) machine learning model to generate embeddings for these maps, and we develop code to implement exploratory search capabilities with these input strategies. We present results for example searches created in consultation with staff in the Library of Congress's Geography and Map Division and describe the strengths, weaknesses, and possibilities for these search queries. Moreover, we introduce a fine-tuning dataset of 10,504 map-caption pairs, along with an architecture for fine-tuning a CLIP model on this dataset. To facilitate re-use, we provide all of our code in documented, interactive Jupyter notebooks and place all code into the public domain. Lastly, we discuss the opportunities and challenges for applying these approaches across both digitized and born-digital collections held by galleries, libraries, archives, and museums.<|reference_end|>
arxiv
@article{mahowald2024integrating, title={Integrating Visual and Textual Inputs for Searching Large-Scale Map Collections with CLIP}, author={Jamie Mahowald and Benjamin Charles Germain Lee}, journal={arXiv preprint arXiv:2410.01190}, year={2024}, archivePrefix={arXiv}, eprint={2410.01190}, primaryClass={cs.IR cs.DL} }
mahowald2024integrating
arxiv-664320
2410.01195
Stochastic Gradient Descent with Adaptive Data
<|reference_start|>Stochastic Gradient Descent with Adaptive Data: Stochastic gradient descent (SGD) is a powerful optimization technique that is particularly useful in online learning scenarios. Its convergence analysis is relatively well understood under the assumption that the data samples are independent and identically distributed (iid). However, applying SGD to policy optimization problems in operations research involves a distinct challenge: the policy changes the environment and thereby affects the data used to update the policy. The adaptively generated data stream involves samples that are non-stationary, no longer independent from each other, and affected by previous decisions. The influence of previous decisions on the data generated introduces bias in the gradient estimate, which presents a potential source of instability for online learning not present in the iid case. In this paper, we introduce simple criteria for the adaptively generated data stream to guarantee the convergence of SGD. We show that the convergence speed of SGD with adaptive data is largely similar to the classical iid setting, as long as the mixing time of the policy-induced dynamics is factored in. Our Lyapunov-function analysis allows one to translate existing stability analysis of stochastic systems studied in operations research into convergence rates for SGD, and we demonstrate this for queueing and inventory management problems. We also showcase how our result can be applied to study the sample complexity of an actor-critic policy gradient algorithm.<|reference_end|>
arxiv
@article{che2024stochastic, title={Stochastic Gradient Descent with Adaptive Data}, author={Ethan Che and Jing Dong and Xin T. Tong}, journal={arXiv preprint arXiv:2410.01195}, year={2024}, archivePrefix={arXiv}, eprint={2410.01195}, primaryClass={cs.LG math.OC} }
che2024stochastic
arxiv-664321
2410.01196
Diverse Expected Improvement (DEI): Diverse Bayesian Optimization of Expensive Computer Simulators
<|reference_start|>Diverse Expected Improvement (DEI): Diverse Bayesian Optimization of Expensive Computer Simulators: The optimization of expensive black-box simulators arises in a myriad of modern scientific and engineering applications. Bayesian optimization provides an appealing solution, by leveraging a fitted surrogate model to guide the selection of subsequent simulator evaluations. In practice, however, the objective is often not to obtain a single good solution, but rather a ''basket'' of good solutions from which users can choose for downstream decision-making. This need arises in our motivating application for real-time control of internal combustion engines for flight propulsion, where a diverse set of control strategies is essential for stable flight control. There has been little work on this front for Bayesian optimization. We thus propose a new Diverse Expected Improvement (DEI) method that searches for diverse ''$\epsilon$-optimal'' solutions: locally-optimal solutions within a tolerance level $\epsilon > 0$ from a global optimum. We show that DEI yields a closed-form acquisition function under a Gaussian process surrogate model, which facilitates efficient sequential queries via automatic differentiation. This closed form further reveals a novel exploration-exploitation-diversity trade-off, which incorporates the desired diversity property within the well-known exploration-exploitation trade-off. We demonstrate the improvement of DEI over existing methods in a suite of numerical experiments, then explore the DEI in two applications on rover trajectory optimization and engine control for flight propulsion.<|reference_end|>
arxiv
@article{miller2024diverse, title={Diverse Expected Improvement (DEI): Diverse Bayesian Optimization of Expensive Computer Simulators}, author={John Joshua Miller and Simon Mak and Benny Sun and Sai Ranjeet Narayanan and Suo Yang and Zongxuan Sun and Kenneth S. Kim and Chol-Bum Mike Kweon}, journal={arXiv preprint arXiv:2410.01196}, year={2024}, archivePrefix={arXiv}, eprint={2410.01196}, primaryClass={stat.AP cs.LG stat.ML} }
miller2024diverse
arxiv-664322
2410.01201
Were RNNs All We Needed?
<|reference_start|>Were RNNs All We Needed?: The scalability limitations of Transformers regarding sequence length have renewed interest in recurrent sequence models that are parallelizable during training. As a result, many novel recurrent architectures, such as S4, Mamba, and Aaren, have been proposed that achieve comparable performance. In this work, we revisit traditional recurrent neural networks (RNNs) from over a decade ago: LSTMs (1997) and GRUs (2014). While these models were slow due to requiring to backpropagate through time (BPTT), we show that by removing their hidden state dependencies from their input, forget, and update gates, LSTMs and GRUs no longer need to BPTT and can be efficiently trained in parallel. Building on this, we introduce minimal versions (minLSTMs and minGRUs) that (1) use significantly fewer parameters than their traditional counterparts and (2) are fully parallelizable during training (175x faster for a sequence of length 512). Lastly, we show that these stripped-down versions of decade-old RNNs match the empirical performance of recent sequence models.<|reference_end|>
arxiv
@article{feng2024were, title={Were RNNs All We Needed?}, author={Leo Feng and Frederick Tung and Mohamed Osama Ahmed and Yoshua Bengio and Hossein Hajimirsadegh}, journal={arXiv preprint arXiv:2410.01201}, year={2024}, archivePrefix={arXiv}, eprint={2410.01201}, primaryClass={cs.LG cs.AI} }
feng2024were
arxiv-664323
2410.01202
AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction
<|reference_start|>AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction: Neural radiance fields have recently revolutionized novel-view synthesis and achieved high-fidelity renderings. However, these methods sacrifice the geometry for the rendering quality, limiting their further applications including relighting and deformation. How to synthesize photo-realistic rendering while reconstructing accurate geometry remains an unsolved problem. In this work, we present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction. Different from previous neural surfaces, our fused-granularity geometry structure balances the overall structures and fine geometric details, producing accurate geometry reconstruction. To disambiguate geometry from reflective appearance, we introduce blended radiance fields to model diffuse and specularity following the anisotropic spherical Gaussian encoding, a physics-based rendering pipeline. With these designs, AniSDF can reconstruct objects with complex structures and produce high-quality renderings. Furthermore, our method is a unified model that does not require complex hyperparameter tuning for specific objects. Extensive experiments demonstrate that our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis.<|reference_end|>
arxiv
@article{gao2024anisdf:, title={AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction}, author={Jingnan Gao and Zhuo Chen and Yichao Yan and Xiaokang Yang}, journal={arXiv preprint arXiv:2410.01202}, year={2024}, archivePrefix={arXiv}, eprint={2410.01202}, primaryClass={cs.CV} }
gao2024anisdf:
arxiv-664324
2410.01208
StringLLM: Understanding the String Processing Capability of Large Language Models
<|reference_start|>StringLLM: Understanding the String Processing Capability of Large Language Models: String processing, which mainly involves the analysis and manipulation of strings, is a fundamental component of modern computing. Despite the significant advancements of large language models (LLMs) in various natural language processing (NLP) tasks, their capability in string processing remains underexplored and underdeveloped. To bridge this gap, we present a comprehensive study of LLMs' string processing capability. In particular, we first propose StringLLM, a method to construct datasets for benchmarking string processing capability of LLMs. We use StringLLM to build a series of datasets, referred to as StringBench. It encompasses a wide range of string processing tasks, allowing us to systematically evaluate LLMs' performance in this area. Our evaluations indicate that LLMs struggle with accurately processing strings compared to humans. To uncover the underlying reasons for this limitation, we conduct an in-depth analysis and subsequently propose an effective approach that significantly enhances LLMs' string processing capability via fine-tuning. This work provides a foundation for future research to understand LLMs' string processing capability. Our code and data are available at https://github.com/wxl-lxw/StringLLM.<|reference_end|>
arxiv
@article{wang2024stringllm:, title={StringLLM: Understanding the String Processing Capability of Large Language Models}, author={Xilong Wang and Hao Fu and Jindong Wang and Neil Zhenqiang Gong}, journal={arXiv preprint arXiv:2410.01208}, year={2024}, archivePrefix={arXiv}, eprint={2410.01208}, primaryClass={cs.CL} }
wang2024stringllm:
arxiv-664325
2410.01209
Debiasing Federated Learning with Correlated Client Participation
<|reference_start|>Debiasing Federated Learning with Correlated Client Participation: In cross-device federated learning (FL) with millions of mobile clients, only a small subset of clients participate in training in every communication round, and Federated Averaging (FedAvg) is the most popular algorithm in practice. Existing analyses of FedAvg usually assume the participating clients are independently sampled in each round from a uniform distribution, which does not reflect real-world scenarios. This paper introduces a theoretical framework that models client participation in FL as a Markov chain to study optimization convergence when clients have non-uniform and correlated participation across rounds. We apply this framework to analyze a more general and practical pattern: every client must wait a minimum number of $R$ rounds (minimum separation) before re-participating. We theoretically prove and empirically observe that increasing minimum separation reduces the bias induced by intrinsic non-uniformity of client availability in cross-device FL systems. Furthermore, we develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution under arbitrary minimum separation and unknown client availability distribution.<|reference_end|>
arxiv
@article{sun2024debiasing, title={Debiasing Federated Learning with Correlated Client Participation}, author={Zhenyu Sun and Ziyang Zhang and Zheng Xu and Gauri Joshi and Pranay Sharma and Ermin Wei}, journal={arXiv preprint arXiv:2410.01209}, year={2024}, archivePrefix={arXiv}, eprint={2410.01209}, primaryClass={cs.LG cs.DC} }
sun2024debiasing
arxiv-664326
2410.01210
Polyp-SES: Automatic Polyp Segmentation with Self-Enriched Semantic Model
<|reference_start|>Polyp-SES: Automatic Polyp Segmentation with Self-Enriched Semantic Model: Automatic polyp segmentation is crucial for effective diagnosis and treatment in colonoscopy images. Traditional methods encounter significant challenges in accurately delineating polyps due to limitations in feature representation and the handling of variability in polyp appearance. Deep learning techniques, including CNN and Transformer-based methods, have been explored to improve polyp segmentation accuracy. However, existing approaches often neglect additional semantics, restricting their ability to acquire adequate contexts of polyps in colonoscopy images. In this paper, we propose an innovative method named ``Automatic Polyp Segmentation with Self-Enriched Semantic Model'' to address these limitations. First, we extract a sequence of features from an input image and decode high-level features to generate an initial segmentation mask. Using the proposed self-enriched semantic module, we query potential semantics and augment deep features with additional semantics, thereby aiding the model in understanding context more effectively. Extensive experiments show superior segmentation performance of the proposed method against state-of-the-art polyp segmentation baselines across five polyp benchmarks in both superior learning and generalization capabilities.<|reference_end|>
arxiv
@article{nguyen2024polyp-ses:, title={Polyp-SES: Automatic Polyp Segmentation with Self-Enriched Semantic Model}, author={Quang Vinh Nguyen and Thanh Hoang Son Vo and Sae-Ryung Kang and Soo-Hyung Kim}, journal={arXiv preprint arXiv:2410.01210}, year={2024}, archivePrefix={arXiv}, eprint={2410.01210}, primaryClass={cs.CV cs.AI} }
nguyen2024polyp-ses:
arxiv-664327
2410.01212
Absolute State-wise Constrained Policy Optimization: High-Probability State-wise Constraints Satisfaction
<|reference_start|>Absolute State-wise Constrained Policy Optimization: High-Probability State-wise Constraints Satisfaction: Enforcing state-wise safety constraints is critical for the application of reinforcement learning (RL) in real-world problems, such as autonomous driving and robot manipulation. However, existing safe RL methods only enforce state-wise constraints in expectation or enforce hard state-wise constraints with strong assumptions. The former does not exclude the probability of safety violations, while the latter is impractical. Our insight is that although it is intractable to guarantee hard state-wise constraints in a model-free setting, we can enforce state-wise safety with high probability while excluding strong assumptions. To accomplish the goal, we propose Absolute State-wise Constrained Policy Optimization (ASCPO), a novel general-purpose policy search algorithm that guarantees high-probability state-wise constraint satisfaction for stochastic systems. We demonstrate the effectiveness of our approach by training neural network policies for extensive robot locomotion tasks, where the agent must adhere to various state-wise safety constraints. Our results show that ASCPO significantly outperforms existing methods in handling state-wise constraints across challenging continuous control tasks, highlighting its potential for real-world applications.<|reference_end|>
arxiv
@article{zhao2024absolute, title={Absolute State-wise Constrained Policy Optimization: High-Probability State-wise Constraints Satisfaction}, author={Weiye Zhao and Feihan Li and Yifan Sun and Yujie Wang and Rui Chen and Tianhao Wei and Changliu Liu}, journal={arXiv preprint arXiv:2410.01212}, year={2024}, archivePrefix={arXiv}, eprint={2410.01212}, primaryClass={cs.LG} }
zhao2024absolute
arxiv-664328
2410.01213
A versatile machine learning workflow for high-throughput analysis of supported metal catalyst particles
<|reference_start|>A versatile machine learning workflow for high-throughput analysis of supported metal catalyst particles: Accurate and efficient characterization of nanoparticles (NPs), particularly regarding particle size distribution, is essential for advancing our understanding of their structure-property relationships and facilitating their design for various applications. In this study, we introduce a novel two-stage artificial intelligence (AI)-driven workflow for NP analysis that leverages prompt engineering techniques from state-of-the-art single-stage object detection and large-scale vision transformer (ViT) architectures. This methodology was applied to transmission electron microscopy (TEM) and scanning TEM (STEM) images of heterogeneous catalysts, enabling high-resolution, high-throughput analysis of particle size distributions for supported metal catalysts. The model's performance in detecting and segmenting NPs was validated across diverse heterogeneous catalyst systems, including various metals (Cu, Ru, Pt, and PtCo), supports (silica ($\text{SiO}_2$), $\gamma$-alumina ($\gamma$-$\text{Al}_2\text{O}_3$), and carbon black), and particle diameter size distributions with means and standard deviations of 2.9 $\pm$ 1.1 nm, 1.6 $\pm$ 0.2 nm, 9.7 $\pm$ 4.6 nm, and 4 $\pm$ 1.0 nm. Additionally, the proposed machine learning (ML) approach successfully detects and segments overlapping NPs anchored on non-uniform catalytic support materials, providing critical insights into their spatial arrangements and interactions. Our AI-assisted NP analysis workflow demonstrates robust generalization across diverse datasets and can be readily applied to similar NP segmentation tasks without requiring costly model retraining.<|reference_end|>
arxiv
@article{genc2024a, title={A versatile machine learning workflow for high-throughput analysis of supported metal catalyst particles}, author={Arda Genc and Justin Marlowe and Anika Jalil and Libor Kovarik and Phillip Christopher}, journal={arXiv preprint arXiv:2410.01213}, year={2024}, archivePrefix={arXiv}, eprint={2410.01213}, primaryClass={cond-mat.mtrl-sci cs.AI} }
genc2024a
arxiv-664329
2410.01215
From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging
<|reference_start|>From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging: While large language models have made significant strides in code generation, the pass rate of the generated code is bottlenecked on subtle errors, often requiring human intervention to pass tests, especially for complex problems. Existing LLM-based debugging systems treat generated programs as monolithic units, failing to address bugs at multiple levels of granularity, from low-level syntax errors to high-level algorithmic flaws. In this paper, we introduce Multi-Granularity Debugger (MGDebugger), a hierarchical code debugger by isolating, identifying, and resolving bugs at various levels of granularity. MGDebugger decomposes problematic code into a hierarchical tree structure of subfunctions, with each level representing a particular granularity of error. During debugging, it analyzes each subfunction and iteratively resolves bugs in a bottom-up manner. To effectively test each subfunction, we propose an LLM-simulated Python executor, which traces code execution and tracks important variable states to pinpoint errors accurately. Extensive experiments demonstrate that MGDebugger outperforms existing debugging systems, achieving an 18.9% improvement in accuracy over seed generations in HumanEval and a 97.6% repair success rate in HumanEvalFix. Furthermore, MGDebugger effectively fixes bugs across different categories and difficulty levels, demonstrating its robustness and effectiveness.<|reference_end|>
arxiv
@article{shi2024from, title={From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging}, author={Yuling Shi and Songsong Wang and Chengcheng Wan and Xiaodong Gu}, journal={arXiv preprint arXiv:2410.01215}, year={2024}, archivePrefix={arXiv}, eprint={2410.01215}, primaryClass={cs.CL cs.AI cs.PL cs.SE} }
shi2024from
arxiv-664330
2410.01216
RS-FME-SwinT: A Novel Feature Map Enhancement Framework Integrating Customized SwinT with Residual and Spatial CNN for Monkeypox Diagnosis
<|reference_start|>RS-FME-SwinT: A Novel Feature Map Enhancement Framework Integrating Customized SwinT with Residual and Spatial CNN for Monkeypox Diagnosis: Monkeypox (MPox) has emerged as a significant global concern, with cases steadily increasing daily. Conventional detection methods, including polymerase chain reaction (PCR) and manual examination, exhibit challenges of low sensitivity, high cost, and substantial workload. Therefore, deep learning offers an automated solution; however, the datasets include data scarcity, texture, contrast, inter-intra class variability, and similarities with other skin infectious diseases. In this regard, a novel hybrid approach is proposed that integrates the learning capacity of Residual Learning and Spatial Exploitation Convolutional Neural Network (CNN) with a customized Swin Transformer (RS-FME-SwinT) to capture multi-scale global and local correlated features for MPox diagnosis. The proposed RS-FME-SwinT technique employs a transfer learning-based feature map enhancement (FME) technique, integrating the customized SwinT for global information capture, residual blocks for texture extraction, and spatial blocks for local contrast variations. Moreover, incorporating new inverse residual blocks within the proposed SwinT effectively captures local patterns and mitigates vanishing gradients. The proposed RS-FME-SwinT has strong learning potential of diverse features that systematically reduce intra-class MPox variation and enable precise discrimination from other skin diseases. Finally, the proposed RS-FME-SwinT is a holdout cross-validated on a diverse MPox dataset and achieved outperformance on state-of-the-art CNNs and ViTs. The proposed RS-FME-SwinT demonstrates commendable results of an accuracy of 97.80%, sensitivity of 96.82%, precision of 98.06%, and an F-score of 97.44% in MPox detection. The RS-FME-SwinT could be a valuable tool for healthcare practitioners, enabling prompt and accurate MPox diagnosis and contributing significantly to mitigation efforts.<|reference_end|>
arxiv
@article{khan2024rs-fme-swint:, title={RS-FME-SwinT: A Novel Feature Map Enhancement Framework Integrating Customized SwinT with Residual and Spatial CNN for Monkeypox Diagnosis}, author={Saddam Hussain Khan and Rashid Iqbal (Artificial Intelligence Lab, Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, Pakistan)}, journal={arXiv preprint arXiv:2410.01216}, year={2024}, archivePrefix={arXiv}, eprint={2410.01216}, primaryClass={eess.IV cs.AI cs.CV} }
khan2024rs-fme-swint:
arxiv-664331
2410.01218
An uncertainty-aware Digital Shadow for underground multimodal CO2 storage monitoring
<|reference_start|>An uncertainty-aware Digital Shadow for underground multimodal CO2 storage monitoring: Geological Carbon Storage (GCS) is arguably the only scalable net-negative CO2 emission technology available. While promising, subsurface complexities and heterogeneity of reservoir properties demand a systematic approach to quantify uncertainty when optimizing production and mitigating storage risks, which include assurances of Containment and Conformance of injected supercritical CO2. As a first step towards the design and implementation of a Digital Twin for monitoring underground storage operations, a machine learning based data-assimilation framework is introduced and validated on carefully designed realistic numerical simulations. As our implementation is based on Bayesian inference but does not yet support control and decision-making, we coin our approach an uncertainty-aware Digital Shadow. To characterize the posterior distribution for the state of CO2 plumes conditioned on multi-modal time-lapse data, the envisioned Shadow combines techniques from Simulation-Based Inference (SBI) and Ensemble Bayesian Filtering to establish probabilistic baselines and assimilate multi-modal data for GCS problems that are challenged by large degrees of freedom, nonlinear multi-physics, non-Gaussianity, and computationally expensive to evaluate fluid flow and seismic simulations. To enable SBI for dynamic systems, a recursive scheme is proposed where the Digital Shadow's neural networks are trained on simulated ensembles for their state and observed data (well and/or seismic). Once training is completed, the system's state is inferred when time-lapse field data becomes available. In this computational study, we observe that a lack of knowledge on the permeability field can be factored into the Digital Shadow's uncertainty quantification. To our knowledge, this work represents the first proof of concept of an uncertainty-aware, in-principle scalable, Digital Shadow.<|reference_end|>
arxiv
@article{gahlot2024an, title={An uncertainty-aware Digital Shadow for underground multimodal CO2 storage monitoring}, author={Abhinav Prakash Gahlot and Rafael Orozco and Ziyi Yin and Felix J. Herrmann}, journal={arXiv preprint arXiv:2410.01218}, year={2024}, archivePrefix={arXiv}, eprint={2410.01218}, primaryClass={physics.geo-ph cs.AI cs.LG physics.comp-ph} }
gahlot2024an
arxiv-664332
2410.01220
Effective Tuning Strategies for Generalist Robot Manipulation Policies
<|reference_start|>Effective Tuning Strategies for Generalist Robot Manipulation Policies: Generalist robot manipulation policies (GMPs) have the potential to generalize across a wide range of tasks, devices, and environments. However, existing policies continue to struggle with out-of-distribution scenarios due to the inherent difficulty of collecting sufficient action data to cover extensively diverse domains. While fine-tuning offers a practical way to quickly adapt a GMPs to novel domains and tasks with limited samples, we observe that the performance of the resulting GMPs differs significantly with respect to the design choices of fine-tuning strategies. In this work, we first conduct an in-depth empirical study to investigate the effect of key factors in GMPs fine-tuning strategies, covering the action space, policy head, supervision signal and the choice of tunable parameters, where 2,500 rollouts are evaluated for a single configuration. We systematically discuss and summarize our findings and identify the key design choices, which we believe give a practical guideline for GMPs fine-tuning. We observe that in a low-data regime, with carefully chosen fine-tuning strategies, a GMPs significantly outperforms the state-of-the-art imitation learning algorithms. The results presented in this work establish a new baseline for future studies on fine-tuned GMPs, and provide a significant addition to the GMPs toolbox for the community.<|reference_end|>
arxiv
@article{zhang2024effective, title={Effective Tuning Strategies for Generalist Robot Manipulation Policies}, author={Wenbo Zhang and Yang Li and Yanyuan Qiao and Siyuan Huang and Jiajun Liu and Feras Dayoub and Xiao Ma and Lingqiao Liu}, journal={arXiv preprint arXiv:2410.01220}, year={2024}, archivePrefix={arXiv}, eprint={2410.01220}, primaryClass={cs.RO cs.LG} }
zhang2024effective
arxiv-664333
2410.01221
Induced Covariance for Causal Discovery in Linear Sparse Structures
<|reference_start|>Induced Covariance for Causal Discovery in Linear Sparse Structures: Causal models seek to unravel the cause-effect relationships among variables from observed data, as opposed to mere mappings among them, as traditional regression models do. This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships. In such scenarios, the causal links represented by directed acyclic graphs (DAGs) can be encapsulated in a structural matrix. The proposed approach leverages the structural matrix's ability to reconstruct data and the statistical properties it imposes on the data to identify the correct structural matrix. This method does not rely on independence tests or graph fitting procedures, making it suitable for scenarios with limited training data. Simulation results demonstrate that the proposed method outperforms the well-known PC, GES, BIC exact search, and LINGAM-based methods in recovering linearly sparse causal structures.<|reference_end|>
arxiv
@article{mohseni-sehdeh2024induced, title={Induced Covariance for Causal Discovery in Linear Sparse Structures}, author={Saeed Mohseni-Sehdeh and Walid Saad}, journal={arXiv preprint arXiv:2410.01221}, year={2024}, archivePrefix={arXiv}, eprint={2410.01221}, primaryClass={cs.LG stat.ML} }
mohseni-sehdeh2024induced
arxiv-664334
2410.01222
Exploring Fine-grained Task Parallelism on Simultaneous Multithreading Cores
<|reference_start|>Exploring Fine-grained Task Parallelism on Simultaneous Multithreading Cores: Nowadays, latency-critical, high-performance applications are parallelized even on power-constrained client systems to improve performance. However, an important scenario of fine-grained tasking on simultaneous multithreading CPU cores in such systems has not been well researched in previous works. Hence, in this paper, we conduct performance analysis of state-of-the-art shared-memory parallel programming frameworks on simultaneous multithreading cores using real-world fine-grained application kernels. We introduce a specialized and simple software-only parallel programming framework called Relic to enable extremely fine-grained tasking on simultaneous multithreading cores. Using Relic framework, we increase performance speedups over serial implementations of benchmark kernels by 19.1% compared to LLVM OpenMP, by 31.0% compared to GNU OpenMP, by 20.2% compared to Intel OpenMP, by 33.2% compared to X-OpenMP, by 30.1% compared to oneTBB, by 23.0% compared to Taskflow, and by 21.4% compared to OpenCilk.<|reference_end|>
arxiv
@article{los2024exploring, title={Exploring Fine-grained Task Parallelism on Simultaneous Multithreading Cores}, author={Denis Los and Igor Petushkov}, journal={International Journal of Open Information Technologies, vol. 12, no. 10, pp. 144-151, 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.01222}, primaryClass={cs.DC} }
los2024exploring
arxiv-664335
2410.01223
Statistical Taylor Expansion
<|reference_start|>Statistical Taylor Expansion: Statistical Taylor expansion replaces the input precise variables in a conventional Taylor expansion with random variables each with known distribution, to calculate the result mean and deviation. It is based on the uncorrelated uncertainty assumption: Each input variable is measured independently with fine enough statistical precision, so that their uncertainties are independent of each other. Statistical Taylor expansion reviews that the intermediate analytic expressions can no longer be regarded as independent of each other, and the result of analytic expression should be path independent. This conclusion differs fundamentally from the conventional common approach in applied mathematics to find the best execution path for a result. This paper also presents an implementation of statistical Taylor expansion called variance arithmetic, and the tests on variance arithmetic.<|reference_end|>
arxiv
@article{wang2024statistical, title={Statistical Taylor Expansion}, author={Chengpu Wang}, journal={arXiv preprint arXiv:2410.01223}, year={2024}, archivePrefix={arXiv}, eprint={2410.01223}, primaryClass={stat.CO cs.LG} }
wang2024statistical
arxiv-664336
2410.01225
Perceptual Piercing: Human Visual Cue-based Object Detection in Low Visibility Conditions
<|reference_start|>Perceptual Piercing: Human Visual Cue-based Object Detection in Low Visibility Conditions: This study proposes a novel deep learning framework inspired by atmospheric scattering and human visual cortex mechanisms to enhance object detection under poor visibility scenarios such as fog, smoke, and haze. These conditions pose significant challenges for object recognition, impacting various sectors, including autonomous driving, aviation management, and security systems. The objective is to enhance the precision and reliability of detection systems under adverse environmental conditions. The research investigates the integration of human-like visual cues, particularly focusing on selective attention and environmental adaptability, to ascertain their impact on object detection's computational efficiency and accuracy. This paper proposes a multi-tiered strategy that integrates an initial quick detection process, followed by targeted region-specific dehazing, and concludes with an in-depth detection phase. The approach is validated using the Foggy Cityscapes, RESIDE-beta (OTS and RTTS) datasets and is anticipated to set new performance standards in detection accuracy while significantly optimizing computational efficiency. The findings offer a viable solution for enhancing object detection in poor visibility and contribute to the broader understanding of integrating human visual principles into deep learning algorithms for intricate visual recognition challenges.<|reference_end|>
arxiv
@article{kumar2024perceptual, title={Perceptual Piercing: Human Visual Cue-based Object Detection in Low Visibility Conditions}, author={Ashutosh Kumar}, journal={arXiv preprint arXiv:2410.01225}, year={2024}, archivePrefix={arXiv}, eprint={2410.01225}, primaryClass={cs.CV} }
kumar2024perceptual
arxiv-664337
2410.01226
Towards Native Generative Model for 3D Head Avatar
<|reference_start|>Towards Native Generative Model for 3D Head Avatar: Creating 3D head avatars is a significant yet challenging task for many application scenarios. Previous studies have set out to learn 3D human head generative models using massive 2D image data. Although these models are highly generalizable for human appearance, the resulting models are not 360$^\circ$-renderable, and the predicted 3D geometry is unreliable. Therefore, such results cannot be used in VR, game modeling, and other scenarios that require 360$^\circ$-renderable 3D head models. An intuitive idea is that 3D head models of limited quantity but high 3D accuracy are more reliable training data for a high-quality 3D generative model. In this vein, we delve into how to learn a native generative model for 360$^\circ$ full head from a limited 3D head dataset. Specifically, three major problems are studied: 1) how to effectively utilize various representations for generating the 360$^\circ$-renderable human head; 2) how to disentangle the appearance, shape, and motion of human faces to generate a 3D head model that can be edited by appearance and driven by motion; 3) and how to extend the generalization capability of the generative model to support downstream tasks. Comprehensive experiments are conducted to verify the effectiveness of the proposed model. We hope the proposed models and artist-designed dataset can inspire future research on learning native generative 3D head models from limited 3D datasets.<|reference_end|>
arxiv
@article{zhuang2024towards, title={Towards Native Generative Model for 3D Head Avatar}, author={Yiyu Zhuang, Yuxiao He, Jiawei Zhang, Yanwen Wang, Jiahe Zhu, Yao Yao, Siyu Zhu, Xun Cao, Hao Zhu}, journal={arXiv preprint arXiv:2410.01226}, year={2024}, archivePrefix={arXiv}, eprint={2410.01226}, primaryClass={cs.CV} }
zhuang2024towards
arxiv-664338
2410.01227
See Me and Believe Me: Causality and Intersectionality in Testimonial Injustice in Healthcare
<|reference_start|>See Me and Believe Me: Causality and Intersectionality in Testimonial Injustice in Healthcare: In medical settings, it is critical that all who are in need of care are correctly heard and understood. When this is not the case due to prejudices a listener has, the speaker is experiencing \emph{testimonial injustice}, which, building upon recent work, we quantify by the presence of several categories of unjust vocabulary in medical notes. In this paper, we use FCI, a causal discovery method, to study the degree to which certain demographic features could lead to marginalization (e.g., age, gender, and race) by way of contributing to testimonial injustice. To achieve this, we review physicians' notes for each patient, where we identify occurrences of unjust vocabulary, along with the demographic features present, and use causal discovery to build a Structural Causal Model (SCM) relating those demographic features to testimonial injustice. We analyze and discuss the resulting SCMs to show the interaction of these factors and how they influence the experience of injustice. Despite the potential presence of some confounding variables, we observe how one contributing feature can make a person more prone to experiencing another contributor of testimonial injustice. There is no single root of injustice and thus intersectionality cannot be ignored. These results call for considering more than singular or equalized attributes of who a person is when analyzing and improving their experiences of bias and injustice. This work is thus a first foray at using causal discovery to understand the nuanced experiences of patients in medical settings, and its insights could be used to guide design principles throughout healthcare, to build trust and promote better patient care.<|reference_end|>
arxiv
@article{andrews2024see, title={See Me and Believe Me: Causality and Intersectionality in Testimonial Injustice in Healthcare}, author={Kenya S. Andrews, Mesrob I. Ohannessian, Elena Zheleva}, journal={arXiv preprint arXiv:2410.01227}, year={2024}, archivePrefix={arXiv}, eprint={2410.01227}, primaryClass={cs.LG cs.AI} }
andrews2024see
arxiv-664339
2410.01228
ConServe: Harvesting GPUs for Low-Latency and High-Throughput Large Language Model Serving
<|reference_start|>ConServe: Harvesting GPUs for Low-Latency and High-Throughput Large Language Model Serving: Many applications are leveraging large language models (LLMs) for complex tasks, and they generally demand low inference latency and high serving throughput for interactive online jobs such as chatbots. However, the tight latency requirement and high load variance of applications pose challenges to serving systems in achieving high GPU utilization. Due to the high costs of scheduling and preemption, today's systems generally use separate clusters to serve online and offline inference tasks, and dedicate GPUs for online inferences to avoid interference. This approach leads to underutilized GPUs because one must reserve enough GPU resources for the peak expected load, even if the average load is low. This paper proposes to harvest stranded GPU resources for offline LLM inference tasks such as document summarization and LLM benchmarking. Unlike online inferences, these tasks usually run in a batch-processing manner with loose latency requirements, making them a good fit for stranded resources that are only available shortly. To enable safe and efficient GPU harvesting without interfering with online tasks, we built ConServe, an LLM serving system that contains (1) an execution engine that preempts running offline tasks upon the arrival of online tasks, (2) an incremental checkpointing mechanism that minimizes the amount of recomputation required by preemptions, and (3) a scheduler that adaptively batches offline tasks for higher GPU utilization. Our evaluation demonstrates that ConServe achieves strong performance isolation when co-serving online and offline tasks but at a much higher GPU utilization. When colocating practical online and offline workloads on popular models such as Llama-2-7B, ConServe achieves 2.35$\times$ higher throughput than state-of-the-art online serving systems and reduces serving latency by 84$\times$ compared to existing co-serving systems.<|reference_end|>
arxiv
@article{qiao2024conserve:, title={ConServe: Harvesting GPUs for Low-Latency and High-Throughput Large Language Model Serving}, author={Yifan Qiao, Shu Anzai, Shan Yu, Haoran Ma, Yang Wang, Miryung Kim, Harry Xu}, journal={arXiv preprint arXiv:2410.01228}, year={2024}, archivePrefix={arXiv}, eprint={2410.01228}, primaryClass={cs.DC cs.LG} }
qiao2024conserve:
arxiv-664340
2410.01230
Towards Efficient Motion Planning for UAVs: Lazy A* Search with Motion Primitives
<|reference_start|>Towards Efficient Motion Planning for UAVs: Lazy A* Search with Motion Primitives: Search-based motion planning algorithms have been widely utilized for unmanned aerial vehicles (UAVs). However, deploying these algorithms on real UAVs faces challenges due to limited onboard computational resources. The algorithms struggle to find solutions in high-dimensional search spaces and require considerable time to ensure that the trajectories are dynamically feasible. This paper incorporates the lazy search concept into search-based planning algorithms to address the critical issue of real-time planning for collision-free and dynamically feasible trajectories on UAVs. We demonstrate that the lazy search motion planning algorithm can efficiently find optimal trajectories and significantly improve computational efficiency.<|reference_end|>
arxiv
@article{wang2024towards, title={Towards Efficient Motion Planning for UAVs: Lazy A* Search with Motion Primitives}, author={Wentao Wang, Yi Shen, Kaiyang Chen, Kaifan Lu}, journal={arXiv preprint arXiv:2410.01230}, year={2024}, archivePrefix={arXiv}, eprint={2410.01230}, primaryClass={cs.RO} }
wang2024towards
arxiv-664341
2410.01231
Revisiting the Index Construction of Proximity Graph-Based Approximate Nearest Neighbor Search
<|reference_start|>Revisiting the Index Construction of Proximity Graph-Based Approximate Nearest Neighbor Search: Proximity graphs (PG) have gained increasing popularity as the state-of-the-art (SOTA) solutions to $k$-approximate nearest neighbor ($k$-ANN) search on high-dimensional data, which serves as a fundamental function in various fields, e.g., information retrieval and retrieval-augmented generation (RAG). Although PG-based approaches have the best $k$-ANN search performance, their index construction cost is superlinear to the number of points, since they have to identify close neighbors for each point to establish the edges. Such superlinear cost substantially limits their scalability in the era of big data. Hence, the goal of this paper is to accelerate the construction of PG-based methods without compromising their $k$-ANN search performance. To achieve this goal, two mainstream categories of PG are revisited: relative neighborhood graph (RNG) and navigable small world graph (NSWG). By revisiting their construction process, we find the issues of construction efficiency. To address these issues, we propose a new construction framework with a novel pruning strategy for edge selection, which accelerates RNG construction while keeping its $k$-ANN search performance. Then, we integrate this framework into NSWG construction to enhance both the construction efficiency and $k$-ANN search performance. Moreover, extensive experiments are conducted to validate our construction framework for both RNG and NSWG. The results demonstrate that it significantly reduces the PG construction cost, achieving up to 5.6x speedup while not compromising the $k$-ANN search performance.<|reference_end|>
arxiv
@article{yang2024revisiting, title={Revisiting the Index Construction of Proximity Graph-Based Approximate Nearest Neighbor Search}, author={Shuo Yang, Jiadong Xie, Yingfan Liu, Jeffrey Xu Yu, Xiyue Gao, Qianru Wang, Yanguo Peng, Jiangtao Cui}, journal={arXiv preprint arXiv:2410.01231}, year={2024}, archivePrefix={arXiv}, eprint={2410.01231}, primaryClass={cs.DB} }
yang2024revisiting
arxiv-664342
2410.01239
Replacement Learning: Training Vision Tasks with Fewer Learnable Parameters
<|reference_start|>Replacement Learning: Training Vision Tasks with Fewer Learnable Parameters: Traditional end-to-end deep learning models often enhance feature representation and overall performance by increasing the depth and complexity of the network during training. However, this approach inevitably introduces issues of parameter redundancy and resource inefficiency, especially in deeper networks. While existing works attempt to skip certain redundant layers to alleviate these problems, challenges related to poor performance, computational complexity, and inefficient memory usage remain. To address these issues, we propose an innovative training approach called Replacement Learning, which mitigates these limitations by completely replacing all the parameters of the frozen layers with only two learnable parameters. Specifically, Replacement Learning selectively freezes the parameters of certain layers, and the frozen layers utilize parameters from adjacent layers, updating them through a parameter integration mechanism controlled by two learnable parameters. This method leverages information from surrounding structures, reduces computation, conserves GPU memory, and maintains a balance between historical context and new inputs, ultimately enhancing overall model performance. We conducted experiments across four benchmark datasets, including CIFAR-10, STL-10, SVHN, and ImageNet, utilizing various architectures such as CNNs and ViTs to validate the effectiveness of Replacement Learning. Experimental results demonstrate that our approach reduces the number of parameters, training time, and memory consumption while completely surpassing the performance of end-to-end training.<|reference_end|>
arxiv
@article{zhang2024replacement, title={Replacement Learning: Training Vision Tasks with Fewer Learnable Parameters}, author={Yuming Zhang, Peizhe Wang, Shouxin Zhang, Dongzhi Guan, Jiabin Liu and Junhao Su}, journal={arXiv preprint arXiv:2410.01239}, year={2024}, archivePrefix={arXiv}, eprint={2410.01239}, primaryClass={cs.CV} }
zhang2024replacement
arxiv-664343
2410.01240
Automatic deductive coding in discourse analysis: an application of large language models in learning analytics
<|reference_start|>Automatic deductive coding in discourse analysis: an application of large language models in learning analytics: Deductive coding is a common discourse analysis method widely used by learning science and learning analytics researchers for understanding teaching and learning interactions. It often requires researchers to manually label all discourses to be analyzed according to a theoretically guided coding scheme, which is time-consuming and labor-intensive. The emergence of large language models such as GPT has opened a new avenue for automatic deductive coding to overcome the limitations of traditional deductive coding. To evaluate the usefulness of large language models in automatic deductive coding, we employed three different classification methods driven by different artificial intelligence technologies, including the traditional text classification method with text feature engineering, BERT-like pretrained language model and GPT-like pretrained large language model (LLM). We applied these methods to two different datasets and explored the potential of GPT and prompt engineering in automatic deductive coding. By analyzing and comparing the accuracy and Kappa values of these three classification methods, we found that GPT with prompt engineering outperformed the other two methods on both datasets with a limited number of training samples. By providing detailed prompt structures, the reported work demonstrated how large language models can be used in the implementation of automatic deductive coding.<|reference_end|>
arxiv
@article{zhang2024automatic, title={Automatic deductive coding in discourse analysis: an application of large language models in learning analytics}, author={Lishan Zhang, Han Wu, Xiaoshan Huang, Tengfei Duan, Hanxiang Du}, journal={arXiv preprint arXiv:2410.01240}, year={2024}, archivePrefix={arXiv}, eprint={2410.01240}, primaryClass={cs.CL cs.HC} }
zhang2024automatic
arxiv-664344
2410.01242
RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance
<|reference_start|>RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance: Large Language Models (LLMs) have shown incredible potential in code generation tasks, and recent research in prompt engineering has enhanced LLMs' understanding of textual information. However, ensuring the accuracy of generated code often requires extensive testing and validation by programmers. While LLMs can typically generate code based on task descriptions, their accuracy remains limited, especially for complex tasks that require a deeper understanding of both the problem statement and the code generation process. This limitation is primarily due to the LLMs' need to simultaneously comprehend text and generate syntactically and semantically correct code, without having the capability to automatically refine the code. In real-world software development, programmers rarely produce flawless code in a single attempt based on the task description alone; instead, they rely on iterative feedback and debugging to refine their programs. Inspired by this process, we introduce a novel architecture of LLM-based agents for code generation and automatic debugging: Refinement and Guidance Debugging (RGD). The RGD framework is a multi-LLM-based agent debugger that leverages three distinct LLM agents-Guide Agent, Debug Agent, and Feedback Agent. RGD decomposes the code generation task into multiple steps, ensuring a clearer workflow and enabling iterative code refinement based on self-reflection and feedback. Experimental results demonstrate that RGD exhibits remarkable code generation capabilities, achieving state-of-the-art performance with a 9.8% improvement on the HumanEval dataset and a 16.2% improvement on the MBPP dataset compared to the state-of-the-art approaches and traditional direct prompting approaches. We highlight the effectiveness of the RGD framework in enhancing LLMs' ability to generate and refine code autonomously.<|reference_end|>
arxiv
@article{jin2024rgd:, title={RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance}, author={Haolin Jin, Zechao Sun, Huaming Chen}, journal={arXiv preprint arXiv:2410.01242}, year={2024}, archivePrefix={arXiv}, eprint={2410.01242}, primaryClass={cs.SE cs.AI cs.CL} }
jin2024rgd:
arxiv-664345
2410.01243
An Information Theory of Compute-Optimal Size Scaling, Emergence, and Plateaus in Language Models
<|reference_start|>An Information Theory of Compute-Optimal Size Scaling, Emergence, and Plateaus in Language Models: Recent empirical studies show three phenomena with increasing size of language models: compute-optimal size scaling, emergent capabilities, and performance plateauing. We present a simple unified mathematical framework to explain all of these language model scaling phenomena, building on recent skill-text bipartite graph frameworks for semantic learning. Modeling the learning of concepts from texts as an iterative process yields an analogy to iterative decoding of low-density parity check (LDPC) codes in information theory. Thence, drawing on finite-size scaling characterizations of LDPC decoding, we derive the compute-optimal size scaling (Chinchilla rule) for language models. Further, using tools from random network theory, we provide a simple explanation for both emergence of complex skills and plateauing of performance as the size of language models scale. We see multiple plateaus.<|reference_end|>
arxiv
@article{nayak2024an, title={An Information Theory of Compute-Optimal Size Scaling, Emergence, and Plateaus in Language Models}, author={Anuj K. Nayak, Lav R. Varshney}, journal={arXiv preprint arXiv:2410.01243}, year={2024}, archivePrefix={arXiv}, eprint={2410.01243}, primaryClass={cs.IT math.IT} }
nayak2024an
arxiv-664346
2410.01244
Equivariant score-based generative models provably learn distributions with symmetries efficiently
<|reference_start|>Equivariant score-based generative models provably learn distributions with symmetries efficiently: Symmetry is ubiquitous in many real-world phenomena and tasks, such as physics, images, and molecular simulations. Empirical studies have demonstrated that incorporating symmetries into generative models can provide better generalization and sampling efficiency when the underlying data distribution has group symmetry. In this work, we provide the first theoretical analysis and guarantees of score-based generative models (SGMs) for learning distributions that are invariant with respect to some group symmetry and offer the first quantitative comparison between data augmentation and adding equivariant inductive bias. First, building on recent works on the Wasserstein-1 ($\mathbf{d}_1$) guarantees of SGMs and empirical estimations of probability divergences under group symmetry, we provide an improved $\mathbf{d}_1$ generalization bound when the data distribution is group-invariant. Second, we describe the inductive bias of equivariant SGMs using Hamilton-Jacobi-Bellman theory, and rigorously demonstrate that one can learn the score of a symmetrized distribution using equivariant vector fields without data augmentations through the analysis of the optimality and equivalence of score-matching objectives. This also provides practical guidance that one does not have to augment the dataset as long as the vector field or the neural network parametrization is equivariant. Moreover, we quantify the impact of not incorporating equivariant structure into the score parametrization, by showing that non-equivariant vector fields can yield worse generalization bounds. This can be viewed as a type of model-form error that describes the missing structure of non-equivariant vector fields. Numerical simulations corroborate our analysis and highlight that data augmentations cannot replace the role of equivariant vector fields.<|reference_end|>
arxiv
@article{chen2024equivariant, title={Equivariant score-based generative models provably learn distributions with symmetries efficiently}, author={Ziyu Chen, Markos A. Katsoulakis, Benjamin J. Zhang}, journal={arXiv preprint arXiv:2410.01244}, year={2024}, archivePrefix={arXiv}, eprint={2410.01244}, primaryClass={stat.ML cs.LG} }
chen2024equivariant
arxiv-664347
2410.01246
AHP-Powered LLM Reasoning for Multi-Criteria Evaluation of Open-Ended Responses
<|reference_start|>AHP-Powered LLM Reasoning for Multi-Criteria Evaluation of Open-Ended Responses: Question answering (QA) tasks have been extensively studied in the field of natural language processing (NLP). Answers to open-ended questions are highly diverse and difficult to quantify, and cannot be simply evaluated as correct or incorrect, unlike close-ended questions with definitive answers. While large language models (LLMs) have demonstrated strong capabilities across various tasks, they exhibit relatively weaker performance in evaluating answers to open-ended questions. In this study, we propose a method that leverages LLMs and the analytic hierarchy process (AHP) to assess answers to open-ended questions. We utilized LLMs to generate multiple evaluation criteria for a question. Subsequently, answers were subjected to pairwise comparisons under each criterion with LLMs, and scores for each answer were calculated in the AHP. We conducted experiments on four datasets using both ChatGPT-3.5-turbo and GPT-4. Our results indicate that our approach more closely aligns with human judgment compared to the four baselines. Additionally, we explored the impact of the number of criteria, variations in models, and differences in datasets on the results.<|reference_end|>
arxiv
@article{lu2024ahp-powered, title={AHP-Powered LLM Reasoning for Multi-Criteria Evaluation of Open-Ended Responses}, author={Xiaotian Lu, Jiyi Li, Koh Takeuchi, Hisashi Kashima}, journal={arXiv preprint arXiv:2410.01246}, year={2024}, archivePrefix={arXiv}, eprint={2410.01246}, primaryClass={cs.CL cs.AI} }
lu2024ahp-powered
arxiv-664348
2410.01249
Dual Approximation Policy Optimization
<|reference_start|>Dual Approximation Policy Optimization: We propose Dual Approximation Policy Optimization (DAPO), a framework that incorporates general function approximation into policy mirror descent methods. In contrast to the popular approach of using the $L_2$-norm to measure function approximation errors, DAPO uses the dual Bregman divergence induced by the mirror map for policy projection. This duality framework has both theoretical and practical implications: not only does it achieve fast linear convergence with general function approximation, but it also includes several well-known practical methods as special cases, immediately providing strong convergence guarantees.<|reference_end|>
arxiv
@article{xiong2024dual, title={Dual Approximation Policy Optimization}, author={Zhihan Xiong, Maryam Fazel, Lin Xiao}, journal={arXiv preprint arXiv:2410.01249}, year={2024}, archivePrefix={arXiv}, eprint={2410.01249}, primaryClass={cs.LG} }
xiong2024dual
arxiv-664349
2410.01250
High and Low Resolution Tradeoffs in Roadside Multimodal Sensing
<|reference_start|>High and Low Resolution Tradeoffs in Roadside Multimodal Sensing: Designing roadside sensing for intelligent transportation applications requires balancing cost and performance, especially when choosing between high and low-resolution sensors. The tradeoff is challenging due to sensor heterogeneity, where different sensors produce unique data modalities due to varying physical principles. High-resolution LiDAR offers detailed point clouds, while 4D millimeter-wave radar, despite providing sparser data, delivers velocity information useful for distinguishing objects based on movement patterns. To assess whether reductions in spatial resolution can be compensated by the informational richness of sensors, particularly in recognizing both vehicles and vulnerable road users (VRUs), we propose Residual Fusion Net (ResFusionNet) to fuse multimodal data for 3D object detection. This enables a quantifiable tradeoff between spatial resolution and information richness across different modalities. Furthermore, we introduce a sensor placement algorithm utilizing probabilistic modeling to manage uncertainties in sensor visibility influenced by environmental or human-related factors. Through simulation-assisted ex-ante evaluation on a real-world testbed, our findings show marked marginal gains in detecting VRUs--an average of 16.7% for pedestrians and 11% for cyclists--when merging velocity-encoded radar with LiDAR, compared to LiDAR only configurations. Additionally, experimental results from 300 runs reveal a maximum loss of 11.5% and an average of 5.25% in sensor coverage due to uncertainty factors. These findings underscore the potential of using low spatial resolution but information-rich sensors to enhance detection capabilities for vulnerable road users while highlighting the necessity of thoroughly evaluating sensor modality heterogeneity, traffic participant diversity, and operational uncertainties when making sensor tradeoffs in practical applications.<|reference_end|>
arxiv
@article{ding2024high, title={High and Low Resolution Tradeoffs in Roadside Multimodal Sensing}, author={Shaozu Ding, Yihong Tang, Marco De Vincenzi, Dajiang Suo}, journal={arXiv preprint arXiv:2410.01250}, year={2024}, archivePrefix={arXiv}, eprint={2410.01250}, primaryClass={cs.RO} }
ding2024high
arxiv-664350
2410.01251
Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample
<|reference_start|>Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample: Facial action unit (AU) detection remains a challenging task, due to the subtlety, dynamics, and diversity of AUs. Recently, the prevailing techniques of self-attention and causal inference have been introduced to AU detection. However, most existing methods directly learn self-attention guided by AU detection, or employ common patterns for all AUs during causal intervention. The former often captures irrelevant information in a global range, and the latter ignores the specific causal characteristic of each AU. In this paper, we propose a novel AU detection framework called AC2D by adaptively constraining self-attention weight distribution and causally deconfounding the sample confounder. Specifically, we explore the mechanism of self-attention weight distribution, in which the self-attention weight distribution of each AU is regarded as spatial distribution and is adaptively learned under the constraint of location-predefined attention and the guidance of AU detection. Moreover, we propose a causal intervention module for each AU, in which the bias caused by training samples and the interference from irrelevant AUs are both suppressed. Extensive experiments show that our method achieves competitive performance compared to state-of-the-art AU detection approaches on challenging benchmarks, including BP4D, DISFA, GFT, and BP4D+ in constrained scenarios and Aff-Wild2 in unconstrained scenarios. The code is available at https://github.com/ZhiwenShao/AC2D.<|reference_end|>
arxiv
@article{shao2024facial, title={Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample}, author={Zhiwen Shao, Hancheng Zhu, Yong Zhou, Xiang Xiang, Bing Liu, Rui Yao, and Lizhuang Ma}, journal={arXiv preprint arXiv:2410.01251}, year={2024}, doi={10.1007/s11263-024-02258-6}, archivePrefix={arXiv}, eprint={2410.01251}, primaryClass={cs.CV} }
shao2024facial
arxiv-664351
2410.01252
Resource-efficient equivariant quantum convolutional neural networks
<|reference_start|>Resource-efficient equivariant quantum convolutional neural networks: Equivariant quantum neural networks (QNNs) are promising quantum machine learning models that exploit symmetries to provide potential quantum advantages. Despite theoretical developments in equivariant QNNs, their implementation on near-term quantum devices remains challenging due to limited computational resources. This study proposes a resource-efficient model of equivariant quantum convolutional neural networks (QCNNs) called equivariant split-parallelizing QCNN (sp-QCNN). Using a group-theoretical approach, we encode general symmetries into our model beyond the translational symmetry addressed by previous sp-QCNNs. We achieve this by splitting the circuit at the pooling layer while preserving symmetry. This splitting structure effectively parallelizes QCNNs to improve measurement efficiency in estimating the expectation value of an observable and its gradient by order of the number of qubits. Our model also exhibits high trainability and generalization performance, including the absence of barren plateaus. Numerical experiments demonstrate that the equivariant sp-QCNN can be trained and generalized with fewer measurement resources than a conventional equivariant QCNN in a noisy quantum data classification task. Our results contribute to the advancement of practical quantum machine learning algorithms.<|reference_end|>
arxiv
@article{chinzei2024resource-efficient, title={Resource-efficient equivariant quantum convolutional neural networks}, author={Koki Chinzei, Quoc Hoan Tran, Yasuhiro Endo, Hirotaka Oshima}, journal={arXiv preprint arXiv:2410.01252}, year={2024}, archivePrefix={arXiv}, eprint={2410.01252}, primaryClass={quant-ph cs.LG} }
chinzei2024resource-efficient
arxiv-664352
2410.01256
ParallelSFL: A Novel Split Federated Learning Framework Tackling Heterogeneity Issues
<|reference_start|>ParallelSFL: A Novel Split Federated Learning Framework Tackling Heterogeneity Issues: Mobile devices contribute more than half of the world's web traffic, providing massive and diverse data for powering various federated learning (FL) applications. In order to avoid the communication bottleneck on the parameter server (PS) and accelerate the training of large-scale models on resource-constrained workers in edge computing (EC) systems, we propose a novel split federated learning (SFL) framework, termed ParallelSFL. Concretely, we split an entire model into a bottom submodel and a top submodel, and divide participating workers into multiple clusters, each of which collaboratively performs the SFL training procedure and exchanges entire models with the PS. However, considering the statistical and system heterogeneity in edge systems, it is challenging to arrange suitable workers to specific clusters for efficient model training. To address these challenges, we carefully develop an effective clustering strategy by optimizing a utility function related to training efficiency and model accuracy. Specifically, ParallelSFL partitions workers into different clusters under the heterogeneity restrictions, thereby promoting model accuracy as well as training efficiency. Meanwhile, ParallelSFL assigns diverse and appropriate local updating frequencies for each cluster to further address system heterogeneity. Extensive experiments are conducted on a physical platform with 80 NVIDIA Jetson devices, and the experimental results show that ParallelSFL can reduce the traffic consumption by at least 21%, speed up the model training by at least 1.36x, and improve model accuracy by at least 5% in heterogeneous scenarios, compared to the baselines.<|reference_end|>
arxiv
@article{liao2024parallelsfl:, title={ParallelSFL: A Novel Split Federated Learning Framework Tackling Heterogeneity Issues}, author={Yunming Liao, Yang Xu, Hongli Xu, Zhiwei Yao, Liusheng Huang, Chunming Qiao}, journal={arXiv preprint arXiv:2410.01256}, year={2024}, archivePrefix={arXiv}, eprint={2410.01256}, primaryClass={cs.DC} }
liao2024parallelsfl:
arxiv-664353
2410.01257
HelpSteer2-Preference: Complementing Ratings with Preferences
<|reference_start|>HelpSteer2-Preference: Complementing Ratings with Preferences: Reward models are critical for aligning models to follow instructions, and are typically trained following one of two popular paradigms: Bradley-Terry style or Regression style. However, there is a lack of evidence that either approach is better than the other, when adequately matched for data. This is primarily because these approaches require data collected in different (but incompatible) formats, meaning that adequately matched data is not available in existing public datasets. To tackle this problem, we release preference annotations (designed for Bradley-Terry training) to complement existing ratings (designed for Regression style training) in the HelpSteer2 dataset. To improve data interpretability, preference annotations are accompanied with human-written justifications. Using this data, we conduct the first head-to-head comparison of Bradley-Terry and Regression models when adequately matched for data. Based on insights derived from such a comparison, we propose a novel approach to combine Bradley-Terry and Regression reward modeling. A Llama-3.1-70B-Instruct model tuned with this approach scores 94.1 on RewardBench, emerging top of more than 140 reward models as of 1 Oct 2024. We also demonstrate the effectiveness of this reward model at aligning models to follow instructions in RLHF. We open-source this dataset (CC-BY-4.0 license) at https://huggingface.co/datasets/nvidia/HelpSteer2 and openly release the trained Reward Model at https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward<|reference_end|>
arxiv
@article{wang2024helpsteer2-preference:, title={HelpSteer2-Preference: Complementing Ratings with Preferences}, author={Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, Yi Dong}, journal={arXiv preprint arXiv:2410.01257}, year={2024}, archivePrefix={arXiv}, eprint={2410.01257}, primaryClass={cs.LG cs.AI cs.CL} }
wang2024helpsteer2-preference:
arxiv-664354
2410.01259
Revisiting Optimism and Model Complexity in the Wake of Overparameterized Machine Learning
<|reference_start|>Revisiting Optimism and Model Complexity in the Wake of Overparameterized Machine Learning: Common practice in modern machine learning involves fitting a large number of parameters relative to the number of observations. These overparameterized models can exhibit surprising generalization behavior, e.g., ``double descent'' in the prediction error curve when plotted against the raw number of model parameters, or another simplistic notion of complexity. In this paper, we revisit model complexity from first principles, by first reinterpreting and then extending the classical statistical concept of (effective) degrees of freedom. Whereas the classical definition is connected to fixed-X prediction error (in which prediction error is defined by averaging over the same, nonrandom covariate points as those used during training), our extension of degrees of freedom is connected to random-X prediction error (in which prediction error is averaged over a new, random sample from the covariate distribution). The random-X setting more naturally embodies modern machine learning problems, where highly complex models, even those complex enough to interpolate the training data, can still lead to desirable generalization performance under appropriate conditions. We demonstrate the utility of our proposed complexity measures through a mix of conceptual arguments, theory, and experiments, and illustrate how they can be used to interpret and compare arbitrary prediction models.<|reference_end|>
arxiv
@article{patil2024revisiting, title={Revisiting Optimism and Model Complexity in the Wake of Overparameterized Machine Learning}, author={Pratik Patil, Jin-Hong Du, Ryan J. Tibshirani}, journal={arXiv preprint arXiv:2410.01259}, year={2024}, archivePrefix={arXiv}, eprint={2410.01259}, primaryClass={stat.ML cs.LG math.ST stat.TH} }
patil2024revisiting
arxiv-664355
2410.01260
Automated Curvy Waveguide Routing for Large-Scale Photonic Integrated Circuits
<|reference_start|>Automated Curvy Waveguide Routing for Large-Scale Photonic Integrated Circuits: As photonic integrated circuit (PIC) designs advance and grow in complexity, largely driven by innovations in photonic computing and interconnects, traditional manual physical design processes have become increasingly cumbersome. Available PIC layout automation tools are mostly schematic-driven, which has not alleviated the burden of manual waveguide planning and layout drawing for engineers. Previous research in automated PIC routing largely relies on off-the-shelf algorithms designed for electrical circuits, which only support high-level route planning to minimize waveguide crossings. It is not customized to handle unique photonics-specific routing constraints and metrics, such as curvy waveguides, bending, port alignment, and insertion loss. These approaches struggle with large-scale PICs and cannot produce real layout geometries without design-rule violations (DRVs). This highlights the pressing need for electronic-photonic design automation (EPDA) tools that can streamline the physical design of modern PICs. In this paper, for the first time, we propose an open-source automated PIC detailed routing tool, dubbed APR, to generate DRV-free PIC layout for large-scale real-world PICs. APR features a grid-based curvy-aware A* engine with adaptive crossing insertion, congestion-aware net ordering and objective, and crossing-waveguide optimization scheme, all tailored to the unique property of PIC. On large-scale real-world photonic computing cores and interconnects, APR generates a DRV-free layout with 14% lower insertion loss and 6.25x speedup than prior methods, paving the way for future advancements in the EPDA toolchain. Our codes are open-sourced at https://github.com/ScopeX-ASU/APR.<|reference_end|>
arxiv
@article{zhou2024automated, title={Automated Curvy Waveguide Routing for Large-Scale Photonic Integrated Circuits}, author={Hongjian Zhou, Keren Zhu, Jiaqi Gu}, journal={arXiv preprint arXiv:2410.01260}, year={2024}, archivePrefix={arXiv}, eprint={2410.01260}, primaryClass={cs.ET physics.optics} }
zhou2024automated
arxiv-664356
2410.01261
OCC-MLLM:Empowering Multimodal Large Language Model For the Understanding of Occluded Objects
<|reference_start|>OCC-MLLM:Empowering Multimodal Large Language Model For the Understanding of Occluded Objects: There is a gap in the understanding of occluded objects in existing large-scale visual language multi-modal models. Current state-of-the-art multimodal models fail to provide satisfactory results in describing occluded objects for visual-language multimodal models through universal visual encoders. Another challenge is the limited number of datasets containing image-text pairs with a large number of occluded objects. Therefore, we introduce a novel multimodal model that applies a newly designed visual encoder to understand occluded objects in RGB images. We also introduce a large-scale visual-language pair dataset for training large-scale visual-language multimodal models and understanding occluded objects. We start our experiments comparing with the state-of-the-art models.<|reference_end|>
arxiv
@article{qiu2024occ-mllm:empowering, title={OCC-MLLM:Empowering Multimodal Large Language Model For the Understanding of Occluded Objects}, author={Wenmo Qiu, Xinhan Di}, journal={arXiv preprint arXiv:2410.01261}, year={2024}, archivePrefix={arXiv}, eprint={2410.01261}, primaryClass={cs.CV} }
qiu2024occ-mllm:empowering
arxiv-664357
2410.01262
Aggregation of Multi Diffusion Models for Enhancing Learned Representations
<|reference_start|>Aggregation of Multi Diffusion Models for Enhancing Learned Representations: Diffusion models have achieved remarkable success in image generation, particularly with the various applications of classifier-free guidance conditional diffusion models. While many diffusion models perform well when controlling for particular aspect among style, character, and interaction, they struggle with fine-grained control due to dataset limitations and intricate model architecture design. This paper introduces a novel algorithm, Aggregation of Multi Diffusion Models (AMDM), which synthesizes features from multiple diffusion models into a specified model, enhancing its learned representations to activate specific features for fine-grained control. AMDM consists of two key components: spherical aggregation and manifold optimization. Spherical aggregation merges intermediate variables from different diffusion models with minimal manifold deviation, while manifold optimization refines these variables to align with the intermediate data manifold, enhancing sampling quality. Experimental results demonstrate that AMDM significantly improves fine-grained control without additional training or inference time, proving its effectiveness. Additionally, it reveals that diffusion models initially focus on features such as position, attributes, and style, with later stages improving generation quality and consistency. AMDM offers a new perspective for tackling the challenges of fine-grained conditional control generation in diffusion models: We can fully utilize existing conditional diffusion models that control specific aspects, or develop new ones, and then aggregate them using the AMDM algorithm. This eliminates the need for constructing complex datasets, designing intricate model architectures, and incurring high training costs. Code is available at: https://github.com/Hammour-steak/AMDM<|reference_end|>
arxiv
@article{yue2024aggregation, title={Aggregation of Multi Diffusion Models for Enhancing Learned Representations}, author={Conghan Yue, Zhengwei Peng, Shiyan Du, Zhi Ji, Dongyu Zhang}, journal={arXiv preprint arXiv:2410.01262}, year={2024}, archivePrefix={arXiv}, eprint={2410.01262}, primaryClass={cs.CV cs.LG} }
yue2024aggregation
arxiv-664358
2410.01264
Backdooring Vision-Language Models with Out-Of-Distribution Data
<|reference_start|>Backdooring Vision-Language Models with Out-Of-Distribution Data: The emergence of Vision-Language Models (VLMs) represents a significant advancement in integrating computer vision with Large Language Models (LLMs) to generate detailed text descriptions from visual inputs. Despite their growing importance, the security of VLMs, particularly against backdoor attacks, is under explored. Moreover, prior works often assume attackers have access to the original training data, which is often unrealistic. In this paper, we address a more practical and challenging scenario where attackers must rely solely on Out-Of-Distribution (OOD) data. We introduce VLOOD (Backdooring Vision-Language Models with Out-of-Distribution Data), a novel approach with two key contributions: (1) demonstrating backdoor attacks on VLMs in complex image-to-text tasks while minimizing degradation of the original semantics under poisoned inputs, and (2) proposing innovative techniques for backdoor injection without requiring any access to the original training data. Our evaluation on image captioning and visual question answering (VQA) tasks confirms the effectiveness of VLOOD, revealing a critical security vulnerability in VLMs and laying the foundation for future research on securing multimodal models against sophisticated threats.<|reference_end|>
arxiv
@article{lyu2024backdooring, title={Backdooring Vision-Language Models with Out-Of-Distribution Data}, author={Weimin Lyu, Jiachen Yao, Saumya Gupta, Lu Pang, Tao Sun, Lingjie Yi, Lijie Hu, Haibin Ling, Chao Chen}, journal={arXiv preprint arXiv:2410.01264}, year={2024}, archivePrefix={arXiv}, eprint={2410.01264}, primaryClass={cs.CV} }
lyu2024backdooring
arxiv-664359
2410.01265
Transformers Handle Endogeneity in In-Context Linear Regression
<|reference_start|>Transformers Handle Endogeneity in In-Context Linear Regression: We explore the capability of transformers to address endogeneity in in-context linear regression. Our main finding is that transformers inherently possess a mechanism to handle endogeneity effectively using instrumental variables (IV). First, we demonstrate that the transformer architecture can emulate a gradient-based bi-level optimization procedure that converges to the widely used two-stage least squares $(\textsf{2SLS})$ solution at an exponential rate. Next, we propose an in-context pretraining scheme and provide theoretical guarantees showing that the global minimizer of the pre-training loss achieves a small excess loss. Our extensive experiments validate these theoretical findings, showing that the trained transformer provides more robust and reliable in-context predictions and coefficient estimates than the $\textsf{2SLS}$ method, in the presence of endogeneity.<|reference_end|>
arxiv
@article{liang2024transformers, title={Transformers Handle Endogeneity in In-Context Linear Regression}, author={Haodong Liang, Krishnakumar Balasubramanian, Lifeng Lai}, journal={arXiv preprint arXiv:2410.01265}, year={2024}, archivePrefix={arXiv}, eprint={2410.01265}, primaryClass={stat.ML cs.AI cs.LG econ.EM math.ST stat.TH} }
liang2024transformers
arxiv-664360
2410.01268
Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Unveiling AI's Potential Through Tools, Techniques, and Applications
<|reference_start|>Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Unveiling AI's Potential Through Tools, Techniques, and Applications: This book serves as an introduction to deep learning and machine learning, focusing on their applications in big data analytics. It covers essential concepts, tools like ChatGPT and Claude, hardware recommendations, and practical guidance on setting up development environments using libraries like PyTorch and TensorFlow. Designed for beginners and advanced users alike, it provides step-by-step instructions, hands-on projects, and insights into AI's future, including AutoML and edge computing.<|reference_end|>
arxiv
@article{feng2024deep, title={Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Unveiling AI's Potential Through Tools, Techniques, and Applications}, author={Pohsun Feng, Ziqian Bi, Yizhu Wen, Xuanhe Pan, Benji Peng, Ming Liu, Jiawei Xu, Keyu Chen, Junyu Liu, Caitlyn Heqi Yin, Sen Zhang, Jinlang Wang, Qian Niu, Ming Li, Tianyang Wang}, journal={arXiv preprint arXiv:2410.01268}, year={2024}, archivePrefix={arXiv}, eprint={2410.01268}, primaryClass={cs.CL cs.LG} }
feng2024deep
arxiv-664361
2410.01270
Panopticus: Omnidirectional 3D Object Detection on Resource-constrained Edge Devices
<|reference_start|>Panopticus: Omnidirectional 3D Object Detection on Resource-constrained Edge Devices: 3D object detection with omnidirectional views enables safety-critical applications such as mobile robot navigation. Such applications increasingly operate on resource-constrained edge devices, facilitating reliable processing without privacy concerns or network delays. To enable cost-effective deployment, cameras have been widely adopted as a low-cost alternative to LiDAR sensors. However, the compute-intensive workload to achieve high performance of camera-based solutions remains challenging due to the computational limitations of edge devices. In this paper, we present Panopticus, a carefully designed system for omnidirectional and camera-based 3D detection on edge devices. Panopticus employs an adaptive multi-branch detection scheme that accounts for spatial complexities. To optimize the accuracy within latency limits, Panopticus dynamically adjusts the model's architecture and operations based on available edge resources and spatial characteristics. We implemented Panopticus on three edge devices and conducted experiments across real-world environments based on the public self-driving dataset and our mobile 360{\deg} camera dataset. Experiment results showed that Panopticus improves accuracy by 62% on average given the strict latency objective of 33ms. Also, Panopticus achieves a 2.1{\times} latency reduction on average compared to baselines.<|reference_end|>
arxiv
@article{lee2024panopticus:, title={Panopticus: Omnidirectional 3D Object Detection on Resource-constrained Edge Devices}, author={Jeho Lee, Chanyoung Jung, Jiwon Kim, Hojung Cha}, journal={arXiv preprint arXiv:2410.01270}, year={2024}, archivePrefix={arXiv}, eprint={2410.01270}, primaryClass={cs.CV cs.SY eess.SY} }
lee2024panopticus:
arxiv-664362
2410.01272
"No Matter What You Do!": Mitigating Backdoor Attacks in Graph Neural Networks
<|reference_start|>"No Matter What You Do!": Mitigating Backdoor Attacks in Graph Neural Networks: Recent studies have exposed that GNNs are vulnerable to several adversarial attacks, among which backdoor attack is one of the toughest. Similar to Deep Neural Networks (DNNs), backdoor attacks in GNNs lie in the fact that the attacker modifies a portion of graph data by embedding triggers and enforces the model to learn the trigger feature during the model training process. Despite the massive prior backdoor defense works on DNNs, defending against backdoor attacks in GNNs is largely unexplored, severely hindering the widespread application of GNNs in real-world tasks. To bridge this gap, we present GCleaner, the first backdoor mitigation method on GNNs. GCleaner can mitigate the presence of the backdoor logic within backdoored GNNs by reversing the backdoor learning procedure, aiming to restore the model performance to a level similar to that of a model directly trained on the original clean dataset. To achieve this objective, we ask: How to recover universal and hard backdoor triggers in GNNs? How to unlearn the backdoor trigger feature while maintaining the model performance? We conduct the graph trigger recovery via the explanation method to identify optimal trigger locations, facilitating the search of universal and hard backdoor triggers in the feature space of the backdoored model through maximal similarity. Subsequently, we introduce the backdoor unlearning mechanism, which combines knowledge distillation and gradient-based explainable knowledge for fine-grained backdoor erasure. Extensive experimental evaluations on four benchmark datasets demonstrate that GCleaner can reduce the backdoor attack success rate to 10% with only 1% of clean data, and has almost negligible degradation in model performance, which far outperforms the state-of-the-art (SOTA) defense methods.<|reference_end|>
arxiv
@article{zhang2024"no, title={"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning}, author={Jiale Zhang, Chengcheng Zhu, Bosen Rao, Hao Sui, Xiaobing Sun, Bing Chen, Chunyi Zhou, Shouling Ji}, journal={arXiv preprint arXiv:2410.01272}, year={2024}, archivePrefix={arXiv}, eprint={2410.01272}, primaryClass={cs.CR cs.LG} }
zhang2024"no
arxiv-664363
2410.01273
CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction
<|reference_start|>CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction: Real-life robot navigation involves more than just reaching a destination; it requires optimizing movements while addressing scenario-specific goals. An intuitive way for humans to express these goals is through abstract cues like verbal commands or rough sketches. Such human guidance may lack details or be noisy. Nonetheless, we expect robots to navigate as intended. For robots to interpret and execute these abstract instructions in line with human expectations, they must share a common understanding of basic navigation concepts with humans. To this end, we introduce CANVAS, a novel framework that combines visual and linguistic instructions for commonsense-aware navigation. Its success is driven by imitation learning, enabling the robot to learn from human navigation behavior. We present COMMAND, a comprehensive dataset with human-annotated navigation results, spanning over 48 hours and 219 km, designed to train commonsense-aware navigation systems in simulated environments. Our experiments show that CANVAS outperforms the strong rule-based system ROS NavStack across all environments, demonstrating superior performance with noisy instructions. Notably, in the orchard environment, where ROS NavStack records a 0% total success rate, CANVAS achieves a total success rate of 67%. CANVAS also closely aligns with human demonstrations and commonsense constraints, even in unseen environments. Furthermore, real-world deployment of CANVAS showcases impressive Sim2Real transfer with a total success rate of 69%, highlighting the potential of learning from human demonstrations in simulated environments for real-world applications.<|reference_end|>
arxiv
@article{choi2024canvas:, title={CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction}, author={Suhwan Choi, Yongjun Cho, Minchan Kim, Jaeyoon Jung, Myunchul Joe, Yubeen Park, Minseo Kim, Sungwoong Kim, Sungjae Lee, Hwiseong Park, Jiwan Chung, Youngjae Yu}, journal={arXiv preprint arXiv:2410.01273}, year={2024}, archivePrefix={arXiv}, eprint={2410.01273}, primaryClass={cs.RO cs.CV cs.LG} }
choi2024canvas:
arxiv-664364
2410.01276
Deep Unlearn: Benchmarking Machine Unlearning
<|reference_start|>Deep Unlearn: Benchmarking Machine Unlearning: Machine unlearning (MU) aims to remove the influence of particular data points from the learnable parameters of a trained machine learning model. This is a crucial capability in light of data privacy requirements, trustworthiness, and safety in deployed models. MU is particularly challenging for deep neural networks (DNNs), such as convolutional nets or vision transformers, as such DNNs tend to memorize a notable portion of their training dataset. Nevertheless, the community lacks a rigorous and multifaceted study that looks into the success of MU methods for DNNs. In this paper, we investigate 18 state-of-the-art MU methods across various benchmark datasets and models, with each evaluation conducted over 10 different initializations, a comprehensive evaluation involving MU over 100K models. We show that, with the proper hyperparameters, Masked Small Gradients (MSG) and Convolution Transpose (CT), consistently perform better in terms of model accuracy and run-time efficiency across different models, datasets, and initializations, assessed by population-based membership inference attacks (MIA) and per-sample unlearning likelihood ratio attacks (U-LiRA). Furthermore, our benchmark highlights the fact that comparing a MU method only with commonly used baselines, such as Gradient Ascent (GA) or Successive Random Relabeling (SRL), is inadequate, and we need better baselines like Negative Gradient Plus (NG+) with proper hyperparameter selection.<|reference_end|>
arxiv
@article{cadet2024deep, title={Deep Unlearn: Benchmarking Machine Unlearning}, author={Xavier F. Cadet, Anastasia Borovykh, Mohammad Malekzadeh, Sara Ahmadi-Abhari, Hamed Haddadi}, journal={arXiv preprint arXiv:2410.01276}, year={2024}, archivePrefix={arXiv}, eprint={2410.01276}, primaryClass={cs.LG cs.AI} }
cadet2024deep
arxiv-664365
2410.01277
A Control Barrier Function Candidate for Limited Field of View Sensors
<|reference_start|>A Control Barrier Function Candidate for Limited Field of View Sensors: The problem of control based on vision measurements (bearings) has been amply studied in the literature; however, the problem of addressing the limits of the field of view of physical sensors has received relatively less attention (especially for agents with non-trivial dynamics). The technical challenge is that, as in most vision-based control approaches, a standard approach to the problem requires knowing the distance between cameras and observed features in the scene, which is not directly available. Instead, we present a solution based on a Control Barrier Function (CBF) approach that uses a splitting of the original differential constraint to effectively remove the dependence on the unknown measurement error. Compared to the current literature, our approach gives strong robustness guarantees against bounded distance estimation errors. We showcase the proposed solution with the numerical simulations of a double integrator and a quadrotor tracking a trajectory while keeping the corners of a rectangular gate in the camera field of view.<|reference_end|>
arxiv
@article{trimarchi2024a, title={A Control Barrier Function Candidate for Limited Field of View Sensors}, author={Biagio Trimarchi, Fabrizio Schiano, Roberto Tron}, journal={arXiv preprint arXiv:2410.01277}, year={2024}, archivePrefix={arXiv}, eprint={2410.01277}, primaryClass={eess.SY cs.SY} }
trimarchi2024a
arxiv-664366
2410.01280
Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models
<|reference_start|>Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models: In-context learning, the ability to adapt based on a few examples in the input prompt, is a ubiquitous feature of large language models (LLMs). However, as LLMs' in-context learning abilities continue to improve, understanding this phenomenon mechanistically becomes increasingly important. In particular, it is not well-understood how LLMs learn to solve specific classes of problems, such as reinforcement learning (RL) problems, in-context. Through three different tasks, we first show that Llama $3$ $70$B can solve simple RL problems in-context. We then analyze the residual stream of Llama using Sparse Autoencoders (SAEs) and find representations that closely match temporal difference (TD) errors. Notably, these representations emerge despite the model only being trained to predict the next token. We verify that these representations are indeed causally involved in the computation of TD errors and $Q$-values by performing carefully designed interventions on them. Taken together, our work establishes a methodology for studying and manipulating in-context learning with SAEs, paving the way for a more mechanistic understanding.<|reference_end|>
arxiv
@article{demircan2024sparse, title={Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models}, author={Can Demircan, Tankred Saanum, Akshay K. Jagadish, Marcel Binz, Eric Schulz}, journal={arXiv preprint arXiv:2410.01280}, year={2024}, archivePrefix={arXiv}, eprint={2410.01280}, primaryClass={cs.LG} }
demircan2024sparse
arxiv-664367
2410.01281
Uncertainty-aware Human Mobility Modeling and Anomaly Detection
<|reference_start|>Uncertainty-aware Human Mobility Modeling and Anomaly Detection: Given the GPS coordinates of a large collection of human agents over time, how can we model their mobility behavior toward effective anomaly detection (e.g. for bad-actor or malicious behavior detection) without any labeled data? Human mobility and trajectory modeling have been studied extensively with varying capacity to handle complex input, and performance-efficiency trade-offs. With the arrival of more expressive models in machine learning, we attempt to model GPS data as a sequence of stay-point events, each with a set of characterizing spatiotemporal features, and leverage modern sequence models such as Transformers for un/self-supervised training and inference. Notably, driven by the inherent stochasticity of certain individuals' behavior, we equip our model with aleatoric/data uncertainty estimation. In addition, to handle data sparsity of a large variety of behaviors, we incorporate epistemic/model uncertainty into our model. Together, aleatoric and epistemic uncertainty enable a robust loss and training dynamics, as well as uncertainty-aware decision making in anomaly scoring. Experiments on large expert-simulated datasets with tens of thousands of agents demonstrate the effectiveness of our model against both forecasting and anomaly detection baselines.<|reference_end|>
arxiv
@article{wen2024uncertainty-aware, title={Uncertainty-aware Human Mobility Modeling and Anomaly Detection}, author={Haomin Wen, Shurui Cao, Leman Akoglu}, journal={arXiv preprint arXiv:2410.01281}, year={2024}, archivePrefix={arXiv}, eprint={2410.01281}, primaryClass={cs.AI cs.LG} }
wen2024uncertainty-aware
arxiv-664368
2410.01284
Deep Kernel Posterior Learning under Infinite Variance Prior Weights
<|reference_start|>Deep Kernel Posterior Learning under Infinite Variance Prior Weights: Neal (1996) proved that infinitely wide shallow Bayesian neural networks (BNN) converge to Gaussian processes (GP), when the network weights have bounded prior variance. Cho & Saul (2009) provided a useful recursive formula for deep kernel processes for relating the covariance kernel of each layer to the layer immediately below. Moreover, they worked out the form of the layer-wise covariance kernel in an explicit manner for several common activation functions. Recent works, including Aitchison et al. (2021), have highlighted that the covariance kernels obtained in this manner are deterministic and hence, precludes any possibility of representation learning, which amounts to learning a non-degenerate posterior of a random kernel given the data. To address this, they propose adding artificial noise to the kernel to retain stochasticity, and develop deep kernel inverse Wishart processes. Nonetheless, this artificial noise injection could be critiqued in that it would not naturally emerge in a classic BNN architecture under an infinite-width limit. To address this, we show that a Bayesian deep neural network, where each layer width approaches infinity, and all network weights are elliptically distributed with infinite variance, converges to a process with $\alpha$-stable marginals in each layer that has a conditionally Gaussian representation. These conditional random covariance kernels could be recursively linked in the manner of Cho & Saul (2009), even though marginally the process exhibits stable behavior, and hence covariances are not even necessarily defined. We also provide useful generalizations of the recent results of Lor\'ia & Bhadra (2024) on shallow networks to multi-layer networks, and remedy the computational burden of their approach. The computational and statistical benefits over competing approaches stand out in simulations and in demonstrations on benchmark data sets.<|reference_end|>
arxiv
@article{loria2024deep, title={Deep Kernel Posterior Learning under Infinite Variance Prior Weights}, author={Jorge Lor{\'i}a, Anindya Bhadra}, journal={arXiv preprint arXiv:2410.01284}, year={2024}, archivePrefix={arXiv}, eprint={2410.01284}, primaryClass={stat.ML cs.LG} }
loria2024deep
arxiv-664369
2410.01285
Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration
<|reference_start|>Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration: The black-box nature of large language models (LLMs) poses challenges in interpreting results, impacting issues such as data intellectual property protection and hallucination tracing. Training data attribution (TDA) methods are considered effective solutions to address these challenges. Most recent TDA methods rely on influence functions, assuming the model achieves minimized empirical risk. However, achieving this criterion is difficult, and sourcing accuracy can be compromised by fitting errors during model training. In this paper, we introduce a novel TDA method called Debias and Denoise Attribution (DDA), which enhances influence functions by addressing fitting errors. Specifically, the debias strategy seeks to improve the performance of influence functions by eliminating the knowledge bias present in the base model before fine-tuning, while the denoise strategy aims to reduce discrepancies in influence scores arising from varying degrees of fitting during the training process through smoothing techniques. Experimental results demonstrate that our method significantly outperforms existing approaches, achieving an averaged AUC of 91.64%. Moreover, DDA exhibits strong generality and scalability across various sources and different-scale models like LLaMA2, QWEN2, and Mistral.<|reference_end|>
arxiv
@article{wu2024enhancing, title={Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration}, author={Kangxi Wu, Liang Pang, Huawei Shen, Xueqi Cheng}, journal={arXiv preprint arXiv:2410.01285}, year={2024}, archivePrefix={arXiv}, eprint={2410.01285}, primaryClass={cs.CL} }
wu2024enhancing
arxiv-664370
2410.01288
Mitigating Copy Bias in In-Context Learning through Neuron Pruning
<|reference_start|>Mitigating Copy Bias in In-Context Learning through Neuron Pruning: Large language models (LLMs) have demonstrated impressive few-shot in-context learning (ICL) abilities. Still, we show that they are sometimes prone to a `copying bias', where they copy answers from provided examples instead of learning the underlying patterns. In this work, we propose a novel and simple method to mitigate such copying bias. First, we create a synthetic task and use the Integrated Gradients method to identify neurons that prioritize copying over generalization. We demonstrate that pruning these neurons consistently improves performance across a diverse set of ICL tasks. We also show that our method is applicable across various LLM architectures, including Transformers and State-Space Models, without requiring modifications. In our analysis, we adopt a task-recognition perspective on ICL and examine task vectors (Hendel et al., 2023) induced by the model. We find that pruning enhances the quality of these vectors, suggesting that the pruned neurons previously hindered effective task recognition.<|reference_end|>
arxiv
@article{ali2024mitigating, title={Mitigating Copy Bias in In-Context Learning through Neuron Pruning}, author={Ameen Ali, Lior Wolf and Ivan Titov}, journal={arXiv preprint arXiv:2410.01288}, year={2024}, archivePrefix={arXiv}, eprint={2410.01288}, primaryClass={cs.CL cs.LG} }
ali2024mitigating
arxiv-664371
2410.01289
The Unlikely Hero: Nonideality in Analog Photonic Neural Networks as Built-in Defender Against Adversarial Attacks
<|reference_start|>The Unlikely Hero: Nonideality in Analog Photonic Neural Networks as Built-in Defender Against Adversarial Attacks: Electronic-photonic computing systems have emerged as a promising platform for accelerating deep neural network (DNN) workloads. Major efforts have been focused on countering hardware non-idealities and boosting efficiency with various hardware/algorithm co-design methods. However, the adversarial robustness of such photonic analog mixed-signal AI hardware remains unexplored. Though the hardware variations can be mitigated with robustness-driven optimization methods, malicious attacks on the hardware show distinct behaviors from noises, which requires a customized protection method tailored to optical analog hardware. In this work, we rethink the role of conventionally undesired non-idealities in photonic analog accelerators and claim their surprising effects on defending against adversarial weight attacks. Inspired by the protection effects from DNN quantization and pruning, we propose a synergistic defense framework tailored for optical analog hardware that proactively protects sensitive weights via pre-attack unary weight encoding and post-attack vulnerability-aware weight locking. Efficiency-reliability trade-offs are formulated as constrained optimization problems and efficiently solved offline without model re-training costs. Extensive evaluation of various DNN benchmarks with a multi-core photonic accelerator shows that our framework maintains near-ideal on-chip inference accuracy under adversarial bit-flip attacks with merely <3% memory overhead. Our codes are open-sourced at https://github.com/ScopeX-ASU/Unlikely_Hero.<|reference_end|>
arxiv
@article{lu2024the, title={The Unlikely Hero: Nonideality in Analog Photonic Neural Networks as Built-in Defender Against Adversarial Attacks}, author={Haotian Lu, Ziang Yin, Partho Bhoumik, Sanmitra Banerjee, Krishnendu Chakrabarty, Jiaqi Gu}, journal={arXiv preprint arXiv:2410.01289}, year={2024}, archivePrefix={arXiv}, eprint={2410.01289}, primaryClass={cs.ET cs.CR physics.optics} }
lu2024the
arxiv-664372
2410.01290
Towards a Law of Iterated Expectations for Heuristic Estimators
<|reference_start|>Towards a Law of Iterated Expectations for Heuristic Estimators: Christiano et al. (2022) define a *heuristic estimator* to be a hypothetical algorithm that estimates the values of mathematical expressions from arguments. In brief, a heuristic estimator $\mathbb{G}$ takes as input a mathematical expression $Y$ and a formal "heuristic argument" $\pi$, and outputs an estimate $\mathbb{G}(Y \mid \pi)$ of $Y$. In this work, we argue for the informal principle that a heuristic estimator ought not to be able to predict its own errors, and we explore approaches to formalizing this principle. Most simply, the principle suggests that $\mathbb{G}(Y - \mathbb{G}(Y \mid \pi) \mid \pi)$ ought to equal zero for all $Y$ and $\pi$. We argue that an ideal heuristic estimator ought to satisfy two stronger properties in this vein, which we term *iterated estimation* (by analogy to the law of iterated expectations) and *error orthogonality*. Although iterated estimation and error orthogonality are intuitively appealing, it can be difficult to determine whether a given heuristic estimator satisfies the properties. As an alternative approach, we explore *accuracy*: a property that (roughly) states that $\mathbb{G}$ has zero average error over a distribution of mathematical expressions. However, in the context of two estimation problems, we demonstrate barriers to creating an accurate heuristic estimator. We finish by discussing challenges and potential paths forward for finding a heuristic estimator that accords with our intuitive understanding of how such an estimator ought to behave, as well as the potential applications of heuristic estimators to understanding the behavior of neural networks.<|reference_end|>
arxiv
@article{christiano2024towards, title={Towards a Law of Iterated Expectations for Heuristic Estimators}, author={Paul Christiano, Jacob Hilton, Andrea Lincoln, Eric Neyman, Mark Xu}, journal={arXiv preprint arXiv:2410.01290}, year={2024}, archivePrefix={arXiv}, eprint={2410.01290}, primaryClass={cs.AI cs.LG} }
christiano2024towards
arxiv-664373
2410.01291
What Did I Say Again? Relating User Needs to Search Outcomes in Conversational Commerce
<|reference_start|>What Did I Say Again? Relating User Needs to Search Outcomes in Conversational Commerce: Recent advances in natural language processing and deep learning have accelerated the development of digital assistants. In conversational commerce, these assistants help customers find suitable products in online shops through natural language conversations. During the dialogue, the assistant identifies the customer's needs and preferences and subsequently suggests potentially relevant products. Traditional online shops often allow users to filter search results based on their preferences using facets. Selected facets can also serve as a reminder of how the product base was filtered. In conversational commerce, however, the absence of facets and the use of advanced natural language processing techniques can leave customers uncertain about how their input was processed by the system. This can hinder transparency and trust, which are critical factors influencing customers' purchase intentions. To address this issue, we propose a novel text-based digital assistant that, in the product assessment step, explains how specific product aspects relate to the user's previous utterances to enhance transparency and facilitate informed decision-making. We conducted a user study (N=135) and found a significant increase in user-perceived transparency when natural language explanations and highlighted text passages were provided, demonstrating their potential to extend system transparency to the product assessment step in conversational commerce.<|reference_end|>
arxiv
@article{schott2024what, title={What Did I Say Again? Relating User Needs to Search Outcomes in Conversational Commerce}, author={Kevin Schott, Andrea Papenmeier, Daniel Hienert, Dagmar Kern}, journal={Proceedings of Mensch und Computer 2024 (MuC '24). ACM, New York, NY, USA, 129-139}, year={2024}, doi={10.1145/3670653.3670680}, archivePrefix={arXiv}, eprint={2410.01291}, primaryClass={cs.HC} }
schott2024what
arxiv-664374
2410.01292
Robust Imitation Learning for Mobile Manipulator Focusing on Task-Related Viewpoints and Regions
<|reference_start|>Robust Imitation Learning for Mobile Manipulator Focusing on Task-Related Viewpoints and Regions: We study how to generalize the visuomotor policy of a mobile manipulator from the perspective of visual observations. The mobile manipulator is prone to occlusion owing to its own body when only a single viewpoint is employed and a significant domain shift when deployed in diverse situations. However, to the best of the authors' knowledge, no study has been able to solve occlusion and domain shift simultaneously and propose a robust policy. In this paper, we propose a robust imitation learning method for mobile manipulators that focuses on task-related viewpoints and their spatial regions when observing multiple viewpoints. The multiple viewpoint policy includes attention mechanism, which is learned with an augmented dataset, and brings optimal viewpoints and robust visual embedding against occlusion and domain shift. Comparison of our results for different tasks and environments with those of previous studies revealed that our proposed method improves the success rate by up to 29.3 points. We also conduct ablation studies using our proposed method. Learning task-related viewpoints from the multiple viewpoints dataset increases robustness to occlusion than using a uniquely defined viewpoint. Focusing on task-related regions contributes to up to a 33.3-point improvement in the success rate against domain shift.<|reference_end|>
arxiv
@article{ishida2024robust, title={Robust Imitation Learning for Mobile Manipulator Focusing on Task-Related Viewpoints and Regions}, author={Yutaro Ishida, Yuki Noguchi, Takayuki Kanai, Kazuhiro Shintani and Hiroshi Bito}, journal={arXiv preprint arXiv:2410.01292}, year={2024}, archivePrefix={arXiv}, eprint={2410.01292}, primaryClass={cs.RO} }
ishida2024robust
arxiv-664375
2410.01293
SurgeoNet: Realtime 3D Pose Estimation of Articulated Surgical Instruments from Stereo Images using a Synthetically-trained Network
<|reference_start|>SurgeoNet: Realtime 3D Pose Estimation of Articulated Surgical Instruments from Stereo Images using a Synthetically-trained Network: Surgery monitoring in Mixed Reality (MR) environments has recently received substantial focus due to its importance in image-based decisions, skill assessment, and robot-assisted surgery. Tracking hands and articulated surgical instruments is crucial for the success of these applications. Due to the lack of annotated datasets and the complexity of the task, only a few works have addressed this problem. In this work, we present SurgeoNet, a real-time neural network pipeline to accurately detect and track surgical instruments from a stereo VR view. Our multi-stage approach is inspired by state-of-the-art neural-network architectural design, like YOLO and Transformers. We demonstrate the generalization capabilities of SurgeoNet in challenging real-world scenarios, achieved solely through training on synthetic data. The approach can be easily extended to any new set of articulated surgical instruments. SurgeoNet's code and data are publicly available.<|reference_end|>
arxiv
@article{aboukhadra2024surgeonet, title={SurgeoNet: Realtime 3D Pose Estimation of Articulated Surgical Instruments from Stereo Images using a Synthetically-trained Network}, author={Ahmed Tawfik Aboukhadra, Nadia Robertini, Jameel Malik, Ahmed Elhayek, Gerd Reis, Didier Stricker}, journal={arXiv preprint arXiv:2410.01293}, year={2024}, archivePrefix={arXiv}, eprint={2410.01293}, primaryClass={cs.CV} }
aboukhadra2024surgeonet
arxiv-664376
2410.01294
Endless Jailbreaks with Bijection Learning
<|reference_start|>Endless Jailbreaks with Bijection Learning: Despite extensive safety training, LLMs are vulnerable to adversarial inputs. In this work, we introduce a simple but powerful attack paradigm, bijection learning, that yields a practically endless set of jailbreak prompts. We exploit language models' advanced reasoning capabilities to teach them invertible languages (bijections) in context, pass encoded queries to the model to bypass built-in safety mechanisms, and finally decode responses back into English, yielding helpful replies to harmful requests. Our approach proves effective on a wide range of frontier language models and harm categories. Bijection learning is an automated and universal attack that grows stronger with scale: larger models with more advanced reasoning capabilities are more susceptible to bijection learning jailbreaks despite stronger safety mechanisms.<|reference_end|>
arxiv
@article{huang2024endless, title={Endless Jailbreaks with Bijection Learning}, author={Brian R.Y. Huang, Maximilian Li, Leonard Tang}, journal={arXiv preprint arXiv:2410.01294}, year={2024}, archivePrefix={arXiv}, eprint={2410.01294}, primaryClass={cs.CL} }
huang2024endless
arxiv-664377
2410.01295
LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion
<|reference_start|>LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion: This paper introduces a novel hierarchical autoencoder that maps 3D models into a highly compressed latent space. The hierarchical autoencoder is specifically designed to tackle the challenges arising from large-scale datasets and generative modeling using diffusion. Different from previous approaches that only work on a regular image or volume grid, our hierarchical autoencoder operates on unordered sets of vectors. Each level of the autoencoder controls different geometric levels of detail. We show that the model can be used to represent a wide range of 3D models while faithfully representing high-resolution geometry details. The training of the new architecture takes 0.70x time and 0.58x memory compared to the baseline. We also explore how the new representation can be used for generative modeling. Specifically, we propose a cascaded diffusion framework where each stage is conditioned on the previous stage. Our design extends existing cascaded designs for image and volume grids to vector sets.<|reference_end|>
arxiv
@article{zhang2024lagem, title={LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion}, author={Biao Zhang, Peter Wonka}, journal={arXiv preprint arXiv:2410.01295}, year={2024}, archivePrefix={arXiv}, eprint={2410.01295}, primaryClass={cs.CV cs.GR} }
zhang2024lagem
arxiv-664378
2410.01296
Speculative Coreset Selection for Task-Specific Fine-tuning
<|reference_start|>Speculative Coreset Selection for Task-Specific Fine-tuning: Task-specific fine-tuning is essential for the deployment of large language models (LLMs), but it requires significant computational resources and time. Existing solutions have proposed coreset selection methods to improve data efficiency and reduce model training overhead, but they still have limitations: 1) Overlooking valuable samples at high pruning rates, which degrades the coreset's performance. 2) Requiring high time overhead during coreset selection to fine-tune and evaluate the target LLM. In this paper, we introduce STAFF, a speculative coreset selection method. STAFF leverages a small model from the same family as the target LLM to efficiently estimate data scores and then verifies the scores on the target LLM to accurately identify and allocate more selection budget to important regions while maintaining coverage of easy regions. We evaluate STAFF on three LLMs and three downstream tasks and show that STAFF improves the performance of SOTA methods by up to 54.3% and reduces selection overhead by up to 70.5% at different pruning rates. Furthermore, we observe that the coreset selected by STAFF at low pruning rates (i.e., 20%) can even obtain better fine-tuning performance than the full dataset.<|reference_end|>
arxiv
@article{zhang2024speculative, title={Speculative Coreset Selection for Task-Specific Fine-tuning}, author={Xiaoyu Zhang, Juan Zhai, Shiqing Ma, Chao Shen, Tianlin Li, Weipeng Jiang, Yang Liu}, journal={arXiv preprint arXiv:2410.01296}, year={2024}, archivePrefix={arXiv}, eprint={2410.01296}, primaryClass={cs.LG cs.AI} }
zhang2024speculative
arxiv-664379
2410.01298
Building a real-time physical layer labeled data logging facility for 6G research
<|reference_start|>Building a real-time physical layer labeled data logging facility for 6G research: This work describes the architecture and vision of designing and implementing a new test infrastructure for 6G physical layer research at KU Leuven. The Testbed is designed for physical layer research and experimentation following several emerging trends, such as cell-free networking, integrated communication, sensing, open disaggregated Radio Access Networks, AI-Native design, and multiband operation. The software is almost entirely based on free and open-source software, making contributing and reusing any component easy. The open Testbed is designed to provide real-time and labeled data on all parts of the physical layer, from raw IQ data to synchronization statistics, channel state information, or symbol/bit/packet error rates. Real-time labeled datasets can be collected by synchronizing the physical layer data logging with a positioning and motion capture system. One of the main goals of the design is to make it open and accessible to external users remotely. Most tests and data captures can easily be automated, and experiment code can be remotely deployed using standard containers (e.g., Docker or Podman). Finally, the paper describes how the Testbed can be used for our research on joint communication and sensing, over-the-air synchronization, distributed processing, and AI in the loop.<|reference_end|>
arxiv
@article{minucci2024building, title={Building a real-time physical layer labeled data logging facility for 6G research}, author={Franco Minucci, Raquel Marina Noguera Oishi, Haoqiu Xiong, Dieter Verbruggen, Cel Thys, Rizqi Hersyandika, Robbert Beerten, Achiel Colpaert, Vida Ranjbar, Sofie Pollin}, journal={arXiv preprint arXiv:2410.01298}, year={2024}, archivePrefix={arXiv}, eprint={2410.01298}, primaryClass={cs.NI eess.SP} }
minucci2024building
arxiv-664380
2410.01299
Single versus Multi-Tone Wireless Power Transfer with Physically Large Array
<|reference_start|>Single versus Multi-Tone Wireless Power Transfer with Physically Large Array: Distributed beamforming is a key enabler to provide power wirelessly to a massive amount of energy-neutral devices (ENDs). However, without prior information and fully depleted ENDs, initially powering these devices efficiently is an open question. This work investigates and assesses the feasibility of harvesting sufficient energy to transmit a backscatter pilot signal from the END, which can be then used for coherent downlink transmission. We experimentally evaluated adaptive single-tone and multi-tone signals during initial charging. The results indicate that the response time for ENDs with unknown locations can extend to several tens of seconds. Notably, the adaptive single-tone excitation shows, among others, better performance at lower transmit power levels, providing a faster response. These findings underscore the potential of adaptive single-tone signals in optimizing power delivery to END in future networks.<|reference_end|>
arxiv
@article{vanmulders2024single, title={Single versus Multi-Tone Wireless Power Transfer with Physically Large Array}, author={Jarne Van Mulders, Benjamin J. B. Deutschmann, Geoffrey Ottoy, Lieven De Strycker, Liesbet Van der Perre, Thomas Wilding and Gilles Callebaut}, journal={arXiv preprint arXiv:2410.01299}, year={2024}, archivePrefix={arXiv}, eprint={2410.01299}, primaryClass={eess.SY cs.SY} }
vanmulders2024single
arxiv-664381
2410.01303
Decentralized Expectation Propagation for Semi-Blind Channel Estimation in Cell-Free Networks
<|reference_start|>Decentralized Expectation Propagation for Semi-Blind Channel Estimation in Cell-Free Networks: This paper serves as a correction to the conference version. In this work, we explore uplink communication in cell-free (CF) massive multiple-input multiple-output (MaMIMO) systems, employing semi-blind transmission structures to mitigate pilot contamination. We propose a simplified, decentralized method based on Expectation Propagation (EP) for semi-blind channel estimation. By utilizing orthogonal pilots, we preprocess the received signals to establish a simplified equivalent factorization scheme for the transmission process. Moreover, this study integrates Central Limit Theory (CLT) with EP, eliminating the need to introduce new auxiliary variables in the factorization scheme. We also refine the algorithm by assessing the variable scales involved. Finally, a decentralized approach is proposed to significantly reduce the computational demands on the Central Processing Unit (CPU).<|reference_end|>
arxiv
@article{zhao2024decentralized, title={Decentralized Expectation Propagation for Semi-Blind Channel Estimation in Cell-Free Networks}, author={Zilu Zhao, Dirk Slock}, journal={arXiv preprint arXiv:2410.01303}, year={2024}, archivePrefix={arXiv}, eprint={2410.01303}, primaryClass={cs.IT eess.SP math.IT} }
zhao2024decentralized
arxiv-664382
2410.01304
Deep learning for action spotting in association football videos
<|reference_start|>Deep learning for action spotting in association football videos: The task of action spotting consists in both identifying actions and precisely localizing them in time with a single timestamp in long, untrimmed video streams. Automatically extracting those actions is crucial for many sports applications, including sports analytics to produce extended statistics on game actions, coaching to provide support to video analysts, or fan engagement to automatically overlay content in the broadcast when specific actions occur. However, before 2018, no large-scale datasets for action spotting in sports were publicly available, which impeded benchmarking action spotting methods. In response, our team built the largest dataset and the most comprehensive benchmarks for sports video understanding, under the umbrella of SoccerNet. Particularly, our dataset contains a subset specifically dedicated to action spotting, called SoccerNet Action Spotting, containing more than 550 complete broadcast games annotated with almost all types of actions that can occur in a football game. This dataset is tailored to develop methods for automatic spotting of actions of interest, including deep learning approaches, by providing a large amount of manually annotated actions. To engage with the scientific community, the SoccerNet initiative organizes yearly challenges, during which participants from all around the world compete to achieve state-of-the-art performances. Thanks to our dataset and challenges, more than 60 methods were developed or published over the past five years, improving on the first baselines and making action spotting a viable option for the sports industry. This paper traces the history of action spotting in sports, from the creation of the task back in 2018, to the role it plays today in research and the sports industry.<|reference_end|>
arxiv
@article{giancola2024deep, title={Deep learning for action spotting in association football videos}, author={Silvio Giancola, Anthony Cioppa, Bernard Ghanem, and Marc Van Droogenbroeck}, journal={arXiv preprint arXiv:2410.01304}, year={2024}, archivePrefix={arXiv}, eprint={2410.01304}, primaryClass={cs.CV} }
giancola2024deep
arxiv-664383
2410.01305
Revisiting Hierarchical Text Classification: Inference and Metrics
<|reference_start|>Revisiting Hierarchical Text Classification: Inference and Metrics: Hierarchical text classification (HTC) is the task of assigning labels to a text within a structured space organized as a hierarchy. Recent works treat HTC as a conventional multilabel classification problem, therefore evaluating it as such. We instead propose to evaluate models based on specifically designed hierarchical metrics and we demonstrate the intricacy of metric choice and prediction inference method. We introduce a new challenging dataset and we evaluate fairly, recent sophisticated models, comparing them with a range of simple but strong baselines, including a new theoretically motivated loss. Finally, we show that those baselines are very often competitive with the latest models. This highlights the importance of carefully considering the evaluation methodology when proposing new methods for HTC. Code implementation and dataset are available at \url{https://github.com/RomanPlaud/revisitingHTC}.<|reference_end|>
arxiv
@article{plaud2024revisiting, title={Revisiting Hierarchical Text Classification: Inference and Metrics}, author={Roman Plaud, Matthieu Labeau, Antoine Saillenfest, Thomas Bonald}, journal={arXiv preprint arXiv:2410.01305}, year={2024}, archivePrefix={arXiv}, eprint={2410.01305}, primaryClass={cs.CL cs.LG} }
plaud2024revisiting
arxiv-664384
2410.01306
Emotion-Aware Response Generation Using Affect-Enriched Embeddings with LLMs
<|reference_start|>Emotion-Aware Response Generation Using Affect-Enriched Embeddings with LLMs: There is a need for empathetic and coherent responses in automated chatbot-facilitated psychotherapy sessions. This study addresses the challenge of enhancing the emotional and contextual understanding of large language models (LLMs) in psychiatric applications. We introduce a novel framework that integrates multiple emotion lexicons, including NRC Emotion Lexicon, VADER, WordNet, and SentiWordNet, with state-of-the-art LLMs such as LLAMA 2, Flan-T5, ChatGPT 3.0, and ChatGPT 4.0. The primary dataset comprises over 2,000 therapy session transcripts from the Counseling and Psychotherapy database, covering discussions on anxiety, depression, trauma, and addiction. We segment the transcripts into smaller chunks, enhancing them with lexical features and computing embeddings using BERT, GPT-3, and RoBERTa to capture semantic and emotional nuances. These embeddings are stored in a FAISS vector database, enabling efficient similarity search and clustering based on cosine similarity. Upon user query, the most relevant segments are retrieved and provided as context to the LLMs, significantly improving the models' ability to generate empathetic and contextually appropriate responses. Experimental evaluations demonstrate that in-corporating emotion lexicons enhances empathy, coherence, informativeness, and fluency scores. Our findings highlight the critical role of emotional embeddings in improving LLM performance for psychotherapy.<|reference_end|>
arxiv
@article{rasool2024emotion-aware, title={Emotion-Aware Response Generation Using Affect-Enriched Embeddings with LLMs}, author={Abdur Rasool, Muhammad Irfan Shahzad, Hafsa Aslam, Vincent Chan}, journal={arXiv preprint arXiv:2410.01306}, year={2024}, archivePrefix={arXiv}, eprint={2410.01306}, primaryClass={cs.CL cs.AI cs.CY} }
rasool2024emotion-aware
arxiv-664385
2410.01307
FanCric : Multi-Agentic Framework for Crafting Fantasy 11 Cricket Teams
<|reference_start|>FanCric : Multi-Agentic Framework for Crafting Fantasy 11 Cricket Teams: Cricket, with its intricate strategies and deep history, increasingly captivates a global audience. The Indian Premier League (IPL), epitomizing Twenty20 cricket, showcases talent in a format that lasts just a few hours as opposed to the longer forms of the game. Renowned for its fusion of technology and fan engagement, the IPL stands as the world's most popular cricket league. This study concentrates on Dream11, India's leading fantasy cricket league for IPL, where participants craft virtual teams based on real player performances to compete internationally. Building a winning fantasy team requires navigating various complex factors including player form and match conditions. Traditionally, this has been approached through operations research and machine learning. This research introduces the FanCric framework, an advanced multi-agent system leveraging Large Language Models (LLMs) and a robust orchestration framework to enhance fantasy team selection in cricket. FanCric employs both structured and unstructured data to surpass traditional methods by incorporating sophisticated AI technologies. The analysis involved scrutinizing approximately 12.7 million unique entries from a Dream11 contest, evaluating FanCric's efficacy against the collective wisdom of crowds and a simpler Prompt Engineering approach. Ablation studies further assessed the impact of generating varying numbers of teams. The exploratory findings are promising, indicating that further investigation into FanCric's capabilities is warranted to fully realize its potential in enhancing strategic decision-making using LLMs in fantasy sports and business in general.<|reference_end|>
arxiv
@article{bhatnagar2024fancric, title={FanCric : Multi-Agentic Framework for Crafting Fantasy 11 Cricket Teams}, author={Mohit Bhatnagar}, journal={arXiv preprint arXiv:2410.01307}, year={2024}, archivePrefix={arXiv}, eprint={2410.01307}, primaryClass={cs.AI} }
bhatnagar2024fancric
arxiv-664386
2410.01308
Rethinking the Expressiveness of GNNs: A Computational Model Perspective
<|reference_start|>Rethinking the Expressiveness of GNNs: A Computational Model Perspective: Graph Neural Networks (GNNs) are extensively employed in graph machine learning, with considerable research focusing on their expressiveness. Current studies often assess GNN expressiveness by comparing them to the Weisfeiler-Lehman (WL) tests or classical graph algorithms. However, we identify three key issues in existing analyses: (1) some studies use preprocessing to enhance expressiveness but overlook its computational costs; (2) some claim the anonymous WL test's limited power while enhancing expressiveness using non-anonymous features, creating a mismatch; and (3) some characterize message-passing GNNs (MPGNNs) with the CONGEST model but make unrealistic assumptions about computational resources, allowing $\textsf{NP-Complete}$ problems to be solved in $O(m)$ depth. We contend that a well-defined computational model is urgently needed to serve as the foundation for discussions on GNN expressiveness. To address these issues, we introduce the Resource-Limited CONGEST (RL-CONGEST) model, incorporating optional preprocessing and postprocessing to form a framework for analyzing GNN expressiveness. Our framework sheds light on computational aspects, including the computational hardness of hash functions in the WL test and the role of virtual nodes in reducing network capacity. Additionally, we suggest that high-order GNNs correspond to first-order model-checking problems, offering new insights into their expressiveness.<|reference_end|>
arxiv
@article{cui2024rethinking, title={Rethinking the Expressiveness of GNNs: A Computational Model Perspective}, author={Guanyu Cui and Zhewei Wei and Hsin-Hao Su}, journal={arXiv preprint arXiv:2410.01308}, year={2024}, archivePrefix={arXiv}, eprint={2410.01308}, primaryClass={cs.LG cs.AI} }
cui2024rethinking
arxiv-664387
2410.01309
Getting Free Bits Back from Rotational Symmetries in LLMs
<|reference_start|>Getting Free Bits Back from Rotational Symmetries in LLMs: Current methods for compressing neural network weights, such as decomposition, pruning, quantization, and channel simulation, often overlook the inherent symmetries within these networks and thus waste bits on encoding redundant information. In this paper, we propose a format based on bits-back coding for storing rotationally symmetric Transformer weights more efficiently than the usual array layout at the same floating-point precision. We evaluate our method on Large Language Models (LLMs) pruned by SliceGPT (Ashkboos et al., 2024) and achieve a 3-5% reduction in total bit usage for free across different model sizes and architectures without impacting model performance within a certain numerical precision.<|reference_end|>
arxiv
@article{he2024getting, title={Getting Free Bits Back from Rotational Symmetries in LLMs}, author={Jiajun He and Gergely Flamich and Jos\'e Miguel Hern\'andez-Lobato}, journal={arXiv preprint arXiv:2410.01309}, year={2024}, archivePrefix={arXiv}, eprint={2410.01309}, primaryClass={cs.IT cs.LG math.IT} }
he2024getting
arxiv-664388
2410.01312
Sampling from Energy-based Policies using Diffusion
<|reference_start|>Sampling from Energy-based Policies using Diffusion: Energy-based policies offer a flexible framework for modeling complex, multimodal behaviors in reinforcement learning (RL). In maximum entropy RL, the optimal policy is a Boltzmann distribution derived from the soft Q-function, but direct sampling from this distribution in continuous action spaces is computationally intractable. As a result, existing methods typically use simpler parametric distributions, like Gaussians, for policy representation - limiting their ability to capture the full complexity of multimodal action distributions. In this paper, we introduce a diffusion-based approach for sampling from energy-based policies, where the negative Q-function defines the energy function. Based on this approach, we propose an actor-critic method called Diffusion Q-Sampling (DQS) that enables more expressive policy representations, allowing stable learning in diverse environments. We show that our approach enhances exploration and captures multimodal behavior in continuous control tasks, addressing key limitations of existing methods.<|reference_end|>
arxiv
@article{jain2024sampling, title={Sampling from Energy-based Policies using Diffusion}, author={Vineet Jain and Tara Akhound-Sadegh and Siamak Ravanbakhsh}, journal={arXiv preprint arXiv:2410.01312}, year={2024}, archivePrefix={arXiv}, eprint={2410.01312}, primaryClass={cs.LG} }
jain2024sampling
arxiv-664389
2410.01313
ADEPT-Z: Zero-Shot Automated Circuit Topology Search for Pareto-Optimal Photonic Tensor Cores
<|reference_start|>ADEPT-Z: Zero-Shot Automated Circuit Topology Search for Pareto-Optimal Photonic Tensor Cores: Photonic tensor cores (PTCs) are essential building blocks for optical artificial intelligence (AI) accelerators based on programmable photonic integrated circuits. Most PTC designs today are manually constructed, with low design efficiency and unsatisfying solution quality. This makes it challenging to meet various hardware specifications and keep up with rapidly evolving AI applications. Prior work has explored gradient-based methods to learn a good PTC structure differentiably. However, it suffers from slow training speed and optimization difficulty when handling multiple non-differentiable objectives and constraints. Therefore, in this work, we propose a more flexible and efficient zero-shot multi-objective evolutionary topology search framework ADEPT-Z that explores Pareto-optimal PTC designs with advanced devices in a larger search space. Multiple objectives can be co-optimized while honoring complicated hardware constraints. With only <3 hours of search, we can obtain tens of diverse Pareto-optimal solutions, 100x faster than the prior gradient-based method, outperforming prior manual designs with 2x higher accuracy weighted area-energy efficiency. The code of ADEPT-Z is available at https://github.com/ScopeX-ASU/ADEPT-Z.<|reference_end|>
arxiv
@article{jiang2024adept-z:, title={ADEPT-Z: Zero-Shot Automated Circuit Topology Search for Pareto-Optimal Photonic Tensor Cores}, author={Ziyang Jiang and Pingchuan Ma and Meng Zhang and Rena Huang and Jiaqi Gu}, journal={arXiv preprint arXiv:2410.01313}, year={2024}, archivePrefix={arXiv}, eprint={2410.01313}, primaryClass={cs.ET cs.NE physics.optics} }
jiang2024adept-z:
arxiv-664390
2410.01316
Fast Summation of Radial Kernels via QMC Slicing
<|reference_start|>Fast Summation of Radial Kernels via QMC Slicing: The fast computation of large kernel sums is a challenging task, which arises as a subproblem in any kernel method. We approach the problem by slicing, which relies on random projections to one-dimensional subspaces and fast Fourier summation. We prove bounds for the slicing error and propose a quasi-Monte Carlo (QMC) approach for selecting the projections based on spherical quadrature rules. Numerical examples demonstrate that our QMC-slicing approach significantly outperforms existing methods like (QMC-)random Fourier features, orthogonal Fourier features or non-QMC slicing on standard test datasets.<|reference_end|>
arxiv
@article{hertrich2024fast, title={Fast Summation of Radial Kernels via QMC Slicing}, author={Johannes Hertrich and Tim Jahn and Michael Quellmalz}, journal={arXiv preprint arXiv:2410.01316}, year={2024}, archivePrefix={arXiv}, eprint={2410.01316}, primaryClass={math.NA cs.LG cs.NA stat.CO stat.ML} }
hertrich2024fast
arxiv-664391
2410.01319
Finetuning Pre-trained Model with Limited Data for LiDAR-based 3D Object Detection by Bridging Domain Gaps
<|reference_start|>Finetuning Pre-trained Model with Limited Data for LiDAR-based 3D Object Detection by Bridging Domain Gaps: LiDAR-based 3D object detectors have been largely utilized in various applications, including autonomous vehicles or mobile robots. However, LiDAR-based detectors often fail to adapt well to target domains with different sensor configurations (e.g., types of sensors, spatial resolution, or FOVs) and location shifts. Collecting and annotating datasets in a new setup is commonly required to reduce such gaps, but it is often expensive and time-consuming. Recent studies suggest that pre-trained backbones can be learned in a self-supervised manner with large-scale unlabeled LiDAR frames. However, despite their expressive representations, they remain challenging to generalize well without substantial amounts of data from the target domain. Thus, we propose a novel method, called Domain Adaptive Distill-Tuning (DADT), to adapt a pre-trained model with limited target data (approximately 100 LiDAR frames), retaining its representation power and preventing it from overfitting. Specifically, we use regularizers to align object-level and context-level representations between the pre-trained and finetuned models in a teacher-student architecture. Our experiments with driving benchmarks, i.e., Waymo Open dataset and KITTI, confirm that our method effectively finetunes a pre-trained model, achieving significant gains in accuracy.<|reference_end|>
arxiv
@article{jang2024finetuning, title={Finetuning Pre-trained Model with Limited Data for LiDAR-based 3D Object Detection by Bridging Domain Gaps}, author={Jiyun Jang and Mincheol Chang and Jongwon Park and Jinkyu Kim}, journal={arXiv preprint arXiv:2410.01319}, year={2024}, archivePrefix={arXiv}, eprint={2410.01319}, primaryClass={cs.CV cs.AI cs.RO} }
jang2024finetuning
arxiv-664392
2410.01320
Detecting Viral Social Events through Censored Observation with Deep Survival Analysis
<|reference_start|>Detecting Viral Social Events through Censored Observation with Deep Survival Analysis: Users' increasing activity across various social networks has made them the most widely used platforms for exchanging and propagating information among individuals. To spread information within a network, a user initially shares information on a social network, and other users in direct contact with that user may then share it as well. Information expands throughout the network as this process repeats. Sets of information that become popular and are repeatedly shared by different individuals are called viral events. Identifying and analyzing viral social events yields valuable insights into the dynamics of information dissemination within a network. More importantly, it enables proactive approaches: by observing the dissemination pattern of a piece of information in the early stages of its expansion, it becomes possible to determine whether the cascade will go viral in the future. This research aims to predict and detect viral events in social networks by observing granular information and using a deep survival analysis-based method. Such a model could play a significant role in identifying rumors, predicting the impact of information, and assisting in optimal decision-making in information management and marketing. The proposed method is evaluated on various real-world datasets from Twitter, Weibo, and Digg.<|reference_end|>
arxiv
@article{ramezani2024detecting, title={Detecting Viral Social Events through Censored Observation with Deep Survival Analysis}, author={Maryam Ramezani and Hossein Goli and AmirMohammad Izad and Hamid R. Rabiee}, journal={arXiv preprint arXiv:2410.01320}, year={2024}, archivePrefix={arXiv}, eprint={2410.01320}, primaryClass={cs.SI} }
ramezani2024detecting
arxiv-664393
2410.01322
Forte : Finding Outliers with Representation Typicality Estimation
<|reference_start|>Forte : Finding Outliers with Representation Typicality Estimation: Generative models can now produce photorealistic synthetic data which is virtually indistinguishable from the real data used to train it. This is a significant evolution over previous models which could produce reasonable facsimiles of the training data, but ones which could be visually distinguished from the training data by human evaluation. Recent work on OOD detection has raised doubts that generative model likelihoods are optimal OOD detectors due to issues involving likelihood misestimation, entropy in the generative process, and typicality. We speculate that generative OOD detectors also failed because their models focused on the pixels rather than the semantic content of the data, leading to failures in near-OOD cases where the pixels may be similar but the information content is significantly different. We hypothesize that estimating typical sets using self-supervised learners leads to better OOD detectors. We introduce a novel approach that leverages representation learning, and informative summary statistics based on manifold estimation, to address all of the aforementioned issues. Our method outperforms other unsupervised approaches and achieves state-of-the art performance on well-established challenging benchmarks, and new synthetic data detection tasks.<|reference_end|>
arxiv
@article{ganguly2024forte, title={Forte : Finding Outliers with Representation Typicality Estimation}, author={Debargha Ganguly and Warren Morningstar and Andrew Yu and Vipin Chaudhary}, journal={arXiv preprint arXiv:2410.01322}, year={2024}, archivePrefix={arXiv}, eprint={2410.01322}, primaryClass={cs.LG cs.AI cs.CV cs.IT math.IT} }
ganguly2024forte
arxiv-664394
2410.01324
Fair Class-Incremental Learning using Sample Weighting
<|reference_start|>Fair Class-Incremental Learning using Sample Weighting: Model fairness is becoming important in class-incremental learning for Trustworthy AI. While accuracy has been a central focus in class-incremental learning, fairness has been relatively understudied. However, naively using all the samples of the current task for training results in unfair catastrophic forgetting for certain sensitive groups including classes. We theoretically analyze that forgetting occurs if the average gradient vector of the current task data is in an "opposite direction" compared to the average gradient vector of a sensitive group, which means their inner products are negative. We then propose a fair class-incremental learning framework that adjusts the training weights of current task samples to change the direction of the average gradient vector and thus reduce the forgetting of underperforming groups and achieve fairness. For various group fairness measures, we formulate optimization problems to minimize the overall losses of sensitive groups while minimizing the disparities among them. We also show the problems can be solved with linear programming and propose an efficient Fairness-aware Sample Weighting (FSW) algorithm. Experiments show that FSW achieves better accuracy-fairness tradeoff results than state-of-the-art approaches on real datasets.<|reference_end|>
arxiv
@article{park2024fair, title={Fair Class-Incremental Learning using Sample Weighting}, author={Jaeyoung Park and Minsu Kim and Steven Euijong Whang}, journal={arXiv preprint arXiv:2410.01324}, year={2024}, archivePrefix={arXiv}, eprint={2410.01324}, primaryClass={cs.LG cs.AI} }
park2024fair
arxiv-664395
2410.01325
ReFeree: Radar-Based Lightweight and Robust Localization using Feature and Free space
<|reference_start|>ReFeree: Radar-Based Lightweight and Robust Localization using Feature and Free space: Place recognition plays an important role in achieving robust long-term autonomy. Real-world robots face a wide range of weather conditions (e.g. overcast, heavy rain, and snow), and most sensors (i.e. camera, LiDAR), which essentially operate within or near the visible electromagnetic spectrum, are sensitive to adverse weather conditions, making reliable localization difficult. In contrast, radar is gaining traction due to its long electromagnetic waves, which are less affected by environmental changes, and its weather independence. In this work, we propose a radar-based lightweight and robust place recognition method. We achieve rotational invariance and a lightweight representation by selecting a one-dimensional ring-shaped description, and robustness by mitigating the impact of false detection, utilizing the opposite noise characteristics of free space and features. In addition, the initial heading can be estimated, which can assist in building a SLAM pipeline that combines odometry and registration while accounting for onboard computing. The proposed method was tested for rigorous validation across various scenarios (i.e. single session, multi-session, and different weather conditions). In particular, we validate that our descriptor achieves reliable place recognition performance in extreme environments that lack structural information, such as the OORD dataset.<|reference_end|>
arxiv
@article{kim2024referee:, title={ReFeree: Radar-Based Lightweight and Robust Localization using Feature and Free space}, author={Hogyun Kim and Byunghee Choi and Euncheol Choi and Younggun Cho}, journal={arXiv preprint arXiv:2410.01325}, year={2024}, doi={10.1109/LRA.2024.3474554}, archivePrefix={arXiv}, eprint={2410.01325}, primaryClass={cs.RO} }
kim2024referee:
arxiv-664396
2410.01330
Enhancing User Fairness in Wireless Powered Communication Networks with STAR-RIS
<|reference_start|>Enhancing User Fairness in Wireless Powered Communication Networks with STAR-RIS: A simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) assisted wireless powered communication network (WPCN) is proposed, where two energy-limited devices first harvest energy from a hybrid access point (HAP) and then use that energy to transmit information back. To fully eliminate the doubly-near-far effect in WPCNs, two STAR-RIS operating protocol-driven transmission strategies, namely energy splitting non-orthogonal multiple access (ES- NOMA) and time switching time division multiple access (TS- TDMA) are proposed. For each strategy, the corresponding optimization problem is formulated to maximize the minimum throughput by jointly optimizing time allocation, user transmit power, active HAP beamforming, and passive STAR-RIS beamforming. For ES-NOMA, the resulting intractable problem is solved via a two-layer algorithm, which exploits the one-dimensional search and block coordinate descent methods in an iterative manner. For TS-TDMA, the optimal active beamforming and passive beamforming are first determined according to the maximum-ratio transmission beamformer. Then, the optimal solution of the time allocation variables is obtained by solving a standard convex problem. Numerical results show that: 1) the STAR-RIS can achieve considerable performance improvements for both strategies compared to the conventional RIS; 2) TS- TDMA is preferred for single-antenna scenarios, whereas ES- NOMA is better suited for multi-antenna scenarios; and 3) the superiority of ES-NOMA over TS-TDMA is enhanced as the number of STAR-RIS elements increases.<|reference_end|>
arxiv
@article{zhu2024enhancing, title={Enhancing User Fairness in Wireless Powered Communication Networks with STAR-RIS}, author={Guangyu Zhu and Xidong Mu and Li Guo and Ao Huang and Shibiao Xu}, journal={arXiv preprint arXiv:2410.01330}, year={2024}, archivePrefix={arXiv}, eprint={2410.01330}, primaryClass={cs.IT eess.SP math.IT} }
zhu2024enhancing
arxiv-664397
2410.01331
Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting
<|reference_start|>Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting: Dealing with Partially Observable Markov Decision Processes is notably a challenging task. We face an average-reward infinite-horizon POMDP setting with an unknown transition model, where we assume the knowledge of the observation model. Under this assumption, we propose the Observation-Aware Spectral (OAS) estimation technique, which enables the POMDP parameters to be learned from samples collected using a belief-based policy. Then, we propose the OAS-UCRL algorithm that implicitly balances the exploration-exploitation trade-off following the $\textit{optimism in the face of uncertainty}$ principle. The algorithm runs through episodes of increasing length. For each episode, the optimal belief-based policy of the estimated POMDP interacts with the environment and collects samples that will be used in the next episode by the OAS estimation procedure to compute a new estimate of the POMDP parameters. Given the estimated model, an optimization oracle computes the new optimal policy. We show the consistency of the OAS procedure, and we prove a regret guarantee of order $\mathcal{O}(\sqrt{T \log(T)})$ for the proposed OAS-UCRL algorithm. We compare against the oracle playing the optimal stochastic belief-based policy and show the efficient scaling of our approach with respect to the dimensionality of the state, action, and observation space. We finally conduct numerical simulations to validate and compare the proposed technique with other baseline approaches.<|reference_end|>
arxiv
@article{russo2024efficient, title={Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting}, author={Alessio Russo and Alberto Maria Metelli and Marcello Restelli}, journal={arXiv preprint arXiv:2410.01331}, year={2024}, archivePrefix={arXiv}, eprint={2410.01331}, primaryClass={cs.LG stat.ML} }
russo2024efficient
arxiv-664398
2410.01334
Unveiling Language Skills under Circuits
<|reference_start|>Unveiling Language Skills under Circuits: The exploration of language skills in language models (LMs) has always been one of the central goals in mechanistic interpretability. However, existing circuit analyses often fall short in representing the full functional scope of these models, primarily due to the exclusion of Feed-Forward layers. Additionally, isolating the effect of a single language skill from a text, which inherently involves multiple entangled skills, poses a significant challenge. To address these gaps, we introduce a novel concept, Memory Circuit, a minimum unit that fully and independently manipulates the memory-reading functionality of a language model, and disentangle the transformer model precisely into a circuit graph which is an ensemble of paths connecting different memory circuits. Based on this disentanglement, we identify salient circuit paths, named as skill paths, responsible for three crucial language skills, i.e., the Previous Token Skill, Induction Skill and In-Context Learning (ICL) Skill, leveraging causal effect estimation through interventions and counterfactuals. Our experiments on various datasets confirm the correspondence between our identified skill paths and language skills, and validate three longstanding hypotheses: 1) Language skills are identifiable through circuit dissection; 2) Simple language skills reside in shallow layers, whereas complex language skills are found in deeper layers; 3) Complex language skills are formed on top of simpler language skills. Our codes are available at: https://github.com/Zodiark-ch/Language-Skill-of-LLMs.<|reference_end|>
arxiv
@article{chen2024unveiling, title={Unveiling Language Skills under Circuits}, author={Hang Chen and Jiaying Zhu and Xinyu Yang and Wenya Wang}, journal={arXiv preprint arXiv:2410.01334}, year={2024}, archivePrefix={arXiv}, eprint={2410.01334}, primaryClass={cs.CL cs.AI} }
chen2024unveiling
arxiv-664399
2410.01335
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models
<|reference_start|>Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models: Model merging, such as model souping, is the practice of combining different models with the same architecture together without further training. In this work, we present a model merging methodology that addresses the difficulty of fine-tuning Large Language Models (LLMs) for target tasks in non-English languages, where task-specific data is often unavailable. We focus on mathematical reasoning and without in-language math data, facilitate cross-lingual transfer by composing language and math capabilities. Starting from the same pretrained model, we fine-tune separate "experts" on math instruction data in English and on generic instruction data in the target language. We then replace the top and bottom transformer layers of the math expert directly with layers from the language expert, which consequently enhances math performance in the target language. The resulting merged models outperform the individual experts and other merging methods on the math benchmark, MGSM, by 10% across four major languages where math instruction data is scarce. In addition, this layer swapping is simple, inexpensive, and intuitive, as it is based on an interpretative analysis of the most important parameter changes during the fine-tuning of each expert. The ability to successfully re-compose LLMs for cross-lingual transfer in this manner opens up future possibilities to combine model expertise, create modular solutions, and transfer reasoning capabilities across languages all post hoc.<|reference_end|>
arxiv
@article{bandarkar2024layer, title={Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models}, author={Lucas Bandarkar and Benjamin Muller and Pritish Yuvraj and Rui Hou and Nayan Singhal and Hongjiang Lv and Bing Liu}, journal={arXiv preprint arXiv:2410.01335}, year={2024}, archivePrefix={arXiv}, eprint={2410.01335}, primaryClass={cs.CL cs.AI cs.LG} }
bandarkar2024layer
arxiv-664400
2410.01336
VectorGraphNET: Graph Attention Networks for Accurate Segmentation of Complex Technical Drawings
<|reference_start|>VectorGraphNET: Graph Attention Networks for Accurate Segmentation of Complex Technical Drawings: This paper introduces a new approach to extract and analyze vector data from technical drawings in PDF format. Our method involves converting PDF files into SVG format and creating a feature-rich graph representation, which captures the relationships between vector entities using geometrical information. We then apply a graph attention transformer with hierarchical label definition to achieve accurate line-level segmentation. Our approach is evaluated on two datasets, including the public FloorplanCAD dataset, which achieves state-of-the-art results on weighted F1 score, surpassing existing methods. The proposed vector-based method offers a more scalable solution for large-scale technical drawing analysis compared to vision-based approaches, while also requiring significantly less GPU power than current state-of-the-art vector-based techniques. Moreover, it demonstrates improved performance in terms of the weighted F1 (wF1) score on the semantic segmentation task. Our results demonstrate the effectiveness of our approach in extracting meaningful information from technical drawings, enabling new applications, and improving existing workflows in the AEC industry. Potential applications of our approach include automated building information modeling (BIM) and construction planning, which could significantly impact the efficiency and productivity of the industry.<|reference_end|>
arxiv
@article{carrara2024vectorgraphnet:, title={VectorGraphNET: Graph Attention Networks for Accurate Segmentation of Complex Technical Drawings}, author={Andrea Carrara and Stavros Nousias and Andr\'e Borrmann}, journal={arXiv preprint arXiv:2410.01336}, year={2024}, archivePrefix={arXiv}, eprint={2410.01336}, primaryClass={cs.CV} }
carrara2024vectorgraphnet: