corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-664801 | 2410.02052 | Improving Autonomous AI Agents with Reflective Tree Search and Self-Learning | <|reference_start|>Improving Autonomous AI Agents with Reflective Tree Search and Self-Learning: Autonomous agents have demonstrated significant potential in automating complex multistep decision-making tasks. However, even state-of-the-art vision-language models (VLMs), such as GPT-4o, still fall short of human-level performance, particularly in intricate web environments and long-horizon planning tasks. To address these limitations, we introduce Reflective Monte Carlo Tree Search (R-MCTS), a novel test-time algorithm designed to enhance the ability of AI agents, e.g., powered by GPT-4o, to explore decision space on the fly. R-MCTS extends traditional MCTS by 1) incorporating contrastive reflection, allowing agents to learn from past interactions and dynamically improve their search efficiency; and 2) using multi-agent debate to provide reliable state evaluation. Moreover, we improve the agent's performance by fine-tuning GPT-4o through self-learning, using R-MCTS generated tree traversals without any human-provided labels. On the challenging VisualWebArena benchmark, our GPT-4o-based R-MCTS agent achieves a 6% to 30% relative improvement across various tasks compared to the previous state-of-the-art. Additionally, we show that the knowledge gained from test-time search can be effectively transferred back to GPT-4o via fine-tuning. The fine-tuned GPT-4o matches 97% of R-MCTS's performance while reducing compute usage by a factor of four at test time. Furthermore, qualitative results reveal that the fine-tuned GPT-4o model demonstrates the ability to explore the environment, evaluate a state, and backtrack to viable ones when it detects that the current state cannot lead to success. Moreover, our work demonstrates the compute scaling properties in both training - data collection with R-MCTS - and testing time. 
These results suggest a promising research direction to enhance VLMs' reasoning and planning capabilities for agentic applications via test-time search and self-learning.<|reference_end|> | arxiv | @article{yu2024exact:,
title={ExACT: Teaching AI Agents to Explore with Reflective-MCTS and
Exploratory Learning},
  author={Xiao Yu and Baolin Peng and Vineeth Vajipey and Hao Cheng and Michel Galley and Jianfeng Gao and Zhou Yu},
journal={arXiv preprint arXiv:2410.02052},
year={2024},
archivePrefix={arXiv},
eprint={2410.02052},
primaryClass={cs.CL cs.CV}
} | yu2024exact: |
arxiv-664802 | 2410.02053 | Digital Eyes: Social Implications of XR EyeSight | <|reference_start|>Digital Eyes: Social Implications of XR EyeSight: The EyeSight feature, introduced with the new Apple Vision Pro XR headset, promises to revolutionize user interaction by simulating real human eye expressions on a digital display. This feature could enhance XR devices' social acceptability and social presence when communicating with others outside the XR experience. In this pilot study, we explore the implications of the EyeSight feature by examining social acceptability, social presence, emotional responses, and technology acceptance. Eight participants engaged in conversational tasks in three conditions to contrast experiencing the Apple Vision Pro with EyeSight, the Meta Quest 3 as a reference XR headset, and a face-to-face setting. Our preliminary findings indicate that while the EyeSight feature improves perceptions of social presence and acceptability compared to the reference headsets, it does not match the social connectivity of direct human interactions.<|reference_end|> | arxiv | @article{vergari2024digital,
title={Digital Eyes: Social Implications of XR EyeSight},
  author={Maurizio Vergari and Tanja Koji\'c and Wafaa Wardah and Maximilian Warsinke and Sebastian M\"oller and Jan-Niklas Voigt-Antons and Robert P. Spang},
journal={arXiv preprint arXiv:2410.02053},
year={2024},
doi={10.1145/3641825.3689526},
archivePrefix={arXiv},
eprint={2410.02053},
primaryClass={cs.HC}
} | vergari2024digital |
arxiv-664803 | 2410.02054 | Comparing Criteria Development Across Domain Experts, Lay Users, and Models in Large Language Model Evaluation | <|reference_start|>Comparing Criteria Development Across Domain Experts, Lay Users, and Models in Large Language Model Evaluation: Large Language Models (LLMs) are increasingly utilized for domain-specific tasks, yet integrating domain expertise into evaluating their outputs remains challenging. A common approach to evaluating LLMs is to use metrics, or criteria, which are assertions used to assess performance that help ensure that their outputs align with domain-specific standards. Previous efforts have involved developers, lay users, or the LLMs themselves in creating these criteria, however, evaluation particularly from a domain expertise perspective, remains understudied. This study explores how domain experts contribute to LLM evaluation by comparing their criteria with those generated by LLMs and lay users. We further investigate how the criteria-setting process evolves, analyzing changes between a priori and a posteriori stages. Our findings emphasize the importance of involving domain experts early in the evaluation process while utilizing complementary strengths of lay users and LLMs. We suggest implications for designing workflows that leverage these strengths at different evaluation stages.<|reference_end|> | arxiv | @article{szymanski2024comparing,
title={Comparing Criteria Development Across Domain Experts, Lay Users, and
Models in Large Language Model Evaluation},
  author={Annalisa Szymanski and Simret Araya Gebreegziabher and Oghenemaro Anuyah and Ronald A. Metoyer and Toby Jia-Jun Li},
journal={arXiv preprint arXiv:2410.02054},
year={2024},
archivePrefix={arXiv},
eprint={2410.02054},
primaryClass={cs.HC}
} | szymanski2024comparing |
arxiv-664804 | 2410.02055 | Using Style Ambiguity Loss to Improve Aesthetics of Diffusion Models | <|reference_start|>Using Style Ambiguity Loss to Improve Aesthetics of Diffusion Models: Teaching text-to-image models to be creative involves using style ambiguity loss. In this work, we explore using the style ambiguity training objective, used to approximate creativity, on a diffusion model. We then experiment with forms of style ambiguity loss that do not require training a classifier or a labeled dataset, and find that the models trained with style ambiguity loss can generate better images than the baseline diffusion models and GANs. Code is available at https://github.com/jamesBaker361/clipcreate.<|reference_end|> | arxiv | @article{baker2024using,
title={Using Style Ambiguity Loss to Improve Aesthetics of Diffusion Models},
author={James Baker},
journal={arXiv preprint arXiv:2410.02055},
year={2024},
archivePrefix={arXiv},
eprint={2410.02055},
primaryClass={cs.CV}
} | baker2024using |
arxiv-664805 | 2410.02056 | Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data | <|reference_start|>Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data: We present Synthio, a novel approach for augmenting small-scale audio classification datasets with synthetic data. Our goal is to improve audio classification accuracy with limited labeled data. Traditional data augmentation techniques, which apply artificial transformations (e.g., adding random noise or masking segments), struggle to create data that captures the true diversity present in real-world audios. To address this shortcoming, we propose to augment the dataset with synthetic audio generated from text-to-audio (T2A) diffusion models. However, synthesizing effective augmentations is challenging because not only should the generated data be acoustically consistent with the underlying small-scale dataset, but they should also have sufficient compositional diversity. To overcome the first challenge, we align the generations of the T2A model with the small-scale dataset using preference optimization. This ensures that the acoustic characteristics of the generated data remain consistent with the small-scale dataset. To address the second challenge, we propose a novel caption generation technique that leverages the reasoning capabilities of Large Language Models to (1) generate diverse and meaningful audio captions and (2) iteratively refine their quality. The generated captions are then used to prompt the aligned T2A model. We extensively evaluate Synthio on ten datasets and four simulated limited-data settings. Results indicate our method consistently outperforms all baselines by 0.1%-39% using a T2A model trained only on weakly-captioned AudioSet.<|reference_end|> | arxiv | @article{ghosh2024synthio:,
title={Synthio: Augmenting Small-Scale Audio Classification Datasets with
Synthetic Data},
author={Sreyan Ghosh and Sonal Kumar and Zhifeng Kong and Rafael Valle and
Bryan Catanzaro and Dinesh Manocha},
journal={arXiv preprint arXiv:2410.02056},
year={2024},
archivePrefix={arXiv},
eprint={2410.02056},
primaryClass={eess.AS cs.AI cs.CL}
} | ghosh2024synthio: |
arxiv-664806 | 2410.02060 | PerTok: Expressive Encoding and Modeling of Symbolic Musical Ideas and Variations | <|reference_start|>PerTok: Expressive Encoding and Modeling of Symbolic Musical Ideas and Variations: We introduce Cadenza, a new multi-stage generative framework for predicting expressive variations of symbolic musical ideas as well as unconditional generations. To accomplish this we propose a novel MIDI encoding method, PerTok (Performance Tokenizer) that captures minute expressive details whilst reducing sequence length up to 59% and vocabulary size up to 95% for polyphonic, monophonic and rhythmic tasks. The proposed framework comprises two sequential stages: 1) Composer and 2) Performer. The Composer model is a transformer-based Variational Autoencoder (VAE), with Rotary Positional Embeddings (RoPE) and an autoregressive decoder modified to more effectively integrate the latent codes of the input musical idea. The Performer model is a bidirectional transformer encoder that is separately trained to predict velocities and microtimings on MIDI sequences. Objective and human evaluations demonstrate Cadenza's versatile capability in 1) matching other unconditional state-of-the-art symbolic models in musical quality whilst sounding more expressive, and 2) composing new, expressive ideas that are both stylistically related to the input whilst providing novel ideas to the user. Our framework is designed, researched and implemented with the objective of ethically providing inspiration for musicians.<|reference_end|> | arxiv | @article{lenz2024pertok:,
title={PerTok: Expressive Encoding and Modeling of Symbolic Musical Ideas and
Variations},
  author={Julian Lenz and Anirudh Mani},
journal={arXiv preprint arXiv:2410.02060},
year={2024},
archivePrefix={arXiv},
eprint={2410.02060},
primaryClass={cs.SD cs.LG eess.AS}
} | lenz2024pertok: |
arxiv-664807 | 2410.02062 | TPP-LLM: Modeling Temporal Point Processes by Efficiently Fine-Tuning Large Language Models | <|reference_start|>TPP-LLM: Modeling Temporal Point Processes by Efficiently Fine-Tuning Large Language Models: Temporal point processes (TPPs) are widely used to model the timing and occurrence of events in domains such as social networks, transportation systems, and e-commerce. In this paper, we introduce TPP-LLM, a novel framework that integrates large language models (LLMs) with TPPs to capture both the semantic and temporal aspects of event sequences. Unlike traditional methods that rely on categorical event type representations, TPP-LLM directly utilizes the textual descriptions of event types, enabling the model to capture rich semantic information embedded in the text. While LLMs excel at understanding event semantics, they are less adept at capturing temporal patterns. To address this, TPP-LLM incorporates temporal embeddings and employs parameter-efficient fine-tuning (PEFT) methods to effectively learn temporal dynamics without extensive retraining. This approach improves both predictive accuracy and computational efficiency. Experimental results across diverse real-world datasets demonstrate that TPP-LLM outperforms state-of-the-art baselines in sequence modeling and event prediction, highlighting the benefits of combining LLMs with TPPs.<|reference_end|> | arxiv | @article{liu2024tpp-llm:,
title={TPP-LLM: Modeling Temporal Point Processes by Efficiently Fine-Tuning
Large Language Models},
  author={Zefang Liu and Yinzhu Quan},
journal={arXiv preprint arXiv:2410.02062},
year={2024},
archivePrefix={arXiv},
eprint={2410.02062},
primaryClass={cs.LG cs.CL}
} | liu2024tpp-llm: |
arxiv-664808 | 2410.02064 | Inspection and Control of Self-Generated-Text Recognition Ability in Llama3-8b-Instruct | <|reference_start|>Inspection and Control of Self-Generated-Text Recognition Ability in Llama3-8b-Instruct: It has been reported that LLMs can recognize their own writing. As this has potential implications for AI safety, yet is relatively understudied, we investigate the phenomenon, seeking to establish whether it robustly occurs at the behavioral level, how the observed behavior is achieved, and whether it can be controlled. First, we find that the Llama3-8b-Instruct chat model - but not the base Llama3-8b model - can reliably distinguish its own outputs from those of humans, and present evidence that the chat model is likely using its experience with its own outputs, acquired during post-training, to succeed at the writing recognition task. Second, we identify a vector in the residual stream of the model that is differentially activated when the model makes a correct self-written-text recognition judgment, show that the vector activates in response to information relevant to self-authorship, present evidence that the vector is related to the concept of "self" in the model, and demonstrate that the vector is causally related to the model's ability to perceive and assert self-authorship. Finally, we show that the vector can be used to control both the model's behavior and its perception, steering the model to claim or disclaim authorship by applying the vector to the model's output as it generates it, and steering the model to believe or disbelieve it wrote arbitrary texts by applying the vector to them as the model reads them.<|reference_end|> | arxiv | @article{ackerman2024inspection,
title={Inspection and Control of Self-Generated-Text Recognition Ability in
Llama3-8b-Instruct},
author={Christopher Ackerman and Nina Panickssery},
journal={arXiv preprint arXiv:2410.02064},
year={2024},
archivePrefix={arXiv},
eprint={2410.02064},
primaryClass={cs.LG cs.CL}
} | ackerman2024inspection |
arxiv-664809 | 2410.02067 | DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation | <|reference_start|>DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation: In the realm of image generation, creating customized images from visual prompt with additional textual instruction emerges as a promising endeavor. However, existing methods, both tuning-based and tuning-free, struggle with interpreting the subject-essential attributes from the visual prompt. This leads to subject-irrelevant attributes infiltrating the generation process, ultimately compromising the personalization quality in both editability and ID preservation. In this paper, we present DisEnvisioner, a novel approach for effectively extracting and enriching the subject-essential features while filtering out -irrelevant information, enabling exceptional customization performance, in a tuning-free manner and using only a single image. Specifically, the feature of the subject and other irrelevant components are effectively separated into distinctive visual tokens, enabling a much more accurate customization. Aiming to further improving the ID consistency, we enrich the disentangled features, sculpting them into more granular representations. Experiments demonstrate the superiority of our approach over existing methods in instruction response (editability), ID consistency, inference speed, and the overall image quality, highlighting the effectiveness and efficiency of DisEnvisioner. Project page: https://disenvisioner.github.io/.<|reference_end|> | arxiv | @article{he2024disenvisioner:,
title={DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized
Image Generation},
  author={Jing He and Haodong Li and Yongzhe Hu and Guibao Shen and Yingjie Cai and Weichao Qiu and Ying-Cong Chen},
journal={arXiv preprint arXiv:2410.02067},
year={2024},
archivePrefix={arXiv},
eprint={2410.02067},
primaryClass={cs.CV}
} | he2024disenvisioner: |
arxiv-664810 | 2410.02068 | Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits | <|reference_start|>Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits: We study how representation learning can improve the learning efficiency of contextual bandit problems. We study the setting where we play T contextual linear bandits with dimension d simultaneously, and these T bandit tasks collectively share a common linear representation with a dimensionality of r much smaller than d. We present a new algorithm based on alternating projected gradient descent (GD) and minimization estimator to recover a low-rank feature matrix. Using the proposed estimator, we present a multi-task learning algorithm for linear contextual bandits and prove the regret bound of our algorithm. We presented experiments and compared the performance of our algorithm against benchmark algorithms.<|reference_end|> | arxiv | @article{lin2024fast,
title={Fast and Sample Efficient Multi-Task Representation Learning in
Stochastic Contextual Bandits},
  author={Jiabin Lin and Shana Moothedath and Namrata Vaswani},
journal={arXiv preprint arXiv:2410.02068},
year={2024},
archivePrefix={arXiv},
eprint={2410.02068},
primaryClass={cs.LG stat.ML}
} | lin2024fast |
arxiv-664811 | 2410.02069 | Semi-Supervised Fine-Tuning of Vision Foundation Models with Content-Style Decomposition | <|reference_start|>Semi-Supervised Fine-Tuning of Vision Foundation Models with Content-Style Decomposition: In this paper, we present a semi-supervised fine-tuning approach designed to improve the performance of pre-trained foundation models on downstream tasks with limited labeled data. By leveraging content-style decomposition within an information-theoretic framework, our method enhances the latent representations of pre-trained vision foundation models, aligning them more effectively with specific task objectives and addressing the problem of distribution shift. We evaluate our approach on multiple datasets, including MNIST, its augmented variations (with yellow and white stripes), CIFAR-10, SVHN, and GalaxyMNIST. The experiments show improvements over supervised finetuning baseline of pre-trained models, particularly in low-labeled data regimes, across both frozen and trainable backbones for the majority of the tested datasets.<|reference_end|> | arxiv | @article{drozdova2024semi-supervised,
title={Semi-Supervised Fine-Tuning of Vision Foundation Models with
Content-Style Decomposition},
  author={Mariia Drozdova and Vitaliy Kinakh and Yury Belousov and Erica Lastufka and Slava Voloshynovskiy},
journal={arXiv preprint arXiv:2410.02069},
year={2024},
archivePrefix={arXiv},
eprint={2410.02069},
primaryClass={cs.CV cs.LG}
} | drozdova2024semi-supervised |
arxiv-664812 | 2410.02070 | MMFNet: Multi-Scale Frequency Masking Neural Network for Multivariate Time Series Forecasting | <|reference_start|>MMFNet: Multi-Scale Frequency Masking Neural Network for Multivariate Time Series Forecasting: Long-term Time Series Forecasting (LTSF) is critical for numerous real-world applications, such as electricity consumption planning, financial forecasting, and disease propagation analysis. LTSF requires capturing long-range dependencies between inputs and outputs, which poses significant challenges due to complex temporal dynamics and high computational demands. While linear models reduce model complexity by employing frequency domain decomposition, current approaches often assume stationarity and filter out high-frequency components that may contain crucial short-term fluctuations. In this paper, we introduce MMFNet, a novel model designed to enhance long-term multivariate forecasting by leveraging a multi-scale masked frequency decomposition approach. MMFNet captures fine, intermediate, and coarse-grained temporal patterns by converting time series into frequency segments at varying scales while employing a learnable mask to filter out irrelevant components adaptively. Extensive experimentation with benchmark datasets shows that MMFNet not only addresses the limitations of the existing methods but also consistently achieves good performance. Specifically, MMFNet achieves up to 6.0% reductions in the Mean Squared Error (MSE) compared to state-of-the-art models designed for multivariate forecasting tasks.<|reference_end|> | arxiv | @article{ma2024mmfnet:,
title={MMFNet: Multi-Scale Frequency Masking Neural Network for Multivariate
Time Series Forecasting},
  author={Aitian Ma and Dongsheng Luo and Mo Sha},
journal={arXiv preprint arXiv:2410.02070},
year={2024},
archivePrefix={arXiv},
eprint={2410.02070},
primaryClass={cs.LG}
} | ma2024mmfnet: |
arxiv-664813 | 2410.02071 | Estimating Disaster Resilience of Hurricane Helene on Florida Counties | <|reference_start|>Estimating Disaster Resilience of Hurricane Helene on Florida Counties: This paper presents a rapid approach to assessing disaster resilience in Florida, particularly regarding Hurricane Helene (2024). This category four storm made landfall on Florida's Gulf Coast in September 2024. Using the Disaster Resilience Index (DRI) developed in this paper, the preparedness and adaptive capacities of communities across counties in Florida are evaluated, identifying the most resilient areas based on three key variables: population size, average per-person income, and the Social Vulnerability Index (SVI). While the Social Vulnerability Index (SVI) accounts for factors like socioeconomic status, household composition, minority status, and housing conditions-key elements in determining a community's resilience to disasters-incorporating a county's population and per person income provides additional insight. A county's total population is directly linked to the number of individuals impacted by a disaster, while personal income reflects a household's capacity to recover. Spatial analysis was performed on the index to compare the vulnerability and resilience levels across thirty-four counties vulnerable to Hurricane Helene's projected path. The results highlight that counties with high income and lower population densities, such as Monroe and Collier, exhibit greater resilience. In contrast, areas with larger populations and higher social vulnerabilities are at greater risk of damage. This study contributes to disaster management planning by providing a rapid yet comprehensive and reassuring socioeconomic impact assessment, offering actionable insights for anticipatory measures and resource allocation.<|reference_end|> | arxiv | @article{basu2024estimating,
title={Estimating Disaster Resilience of Hurricane Helene on Florida Counties},
  author={Reetwika Basu and Siddharth Chaudhary and Chinmay Deval and Alqamah Sayeed and Kelsey Herndon and Robert Griffin},
journal={arXiv preprint arXiv:2410.02071},
year={2024},
archivePrefix={arXiv},
eprint={2410.02071},
primaryClass={cs.SI}
} | basu2024estimating |
arxiv-664814 | 2410.02072 | Learning from the Giants: A Practical Approach to Underwater Depth and Surface Normals Estimation | <|reference_start|>Learning from the Giants: A Practical Approach to Underwater Depth and Surface Normals Estimation: Monocular Depth and Surface Normals Estimation (MDSNE) is crucial for tasks such as 3D reconstruction, autonomous navigation, and underwater exploration. Current methods rely either on discriminative models, which struggle with transparent or reflective surfaces, or generative models, which, while accurate, are computationally expensive. This paper presents a novel deep learning model for MDSNE, specifically tailored for underwater environments, using a hybrid architecture that integrates Convolutional Neural Networks (CNNs) with Transformers, leveraging the strengths of both approaches. Training effective MDSNE models is often hampered by noisy real-world datasets and the limited generalization of synthetic datasets. To address this, we generate pseudo-labeled real data using multiple pre-trained MDSNE models. To ensure the quality of this data, we propose the Depth Normal Evaluation and Selection Algorithm (DNESA), which evaluates and selects the most reliable pseudo-labeled samples using domain-specific metrics. A lightweight student model is then trained on this curated dataset. Our model reduces parameters by 90% and training costs by 80%, allowing real-time 3D perception on resource-constrained devices. Key contributions include: a novel and efficient MDSNE model, the DNESA algorithm, a domain-specific data pipeline, and a focus on real-time performance and scalability. Designed for real-world underwater applications, our model facilitates low-cost deployments in underwater robots and autonomous vehicles, bridging the gap between research and practical implementation.<|reference_end|> | arxiv | @article{saleh2024learning,
title={Learning from the Giants: A Practical Approach to Underwater Depth and
Surface Normals Estimation},
  author={Alzayat Saleh and Melanie Olsen and Bouchra Senadji and Mostafa Rahimi Azghadi},
journal={arXiv preprint arXiv:2410.02072},
year={2024},
archivePrefix={arXiv},
eprint={2410.02072},
primaryClass={cs.CV}
} | saleh2024learning |
arxiv-664815 | 2410.02073 | Depth Pro: Sharp Monocular Metric Depth in Less Than a Second | <|reference_start|>Depth Pro: Sharp Monocular Metric Depth in Less Than a Second: We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image. Extensive experiments analyze specific design choices and demonstrate that Depth Pro outperforms prior work along multiple dimensions. We release code and weights at https://github.com/apple/ml-depth-pro<|reference_end|> | arxiv | @article{bochkovskii2024depth,
title={Depth Pro: Sharp Monocular Metric Depth in Less Than a Second},
  author={Aleksei Bochkovskii and Ama\"el Delaunoy and Hugo Germain and Marcel Santos and Yichao Zhou and Stephan R. Richter and Vladlen Koltun},
journal={arXiv preprint arXiv:2410.02073},
year={2024},
archivePrefix={arXiv},
eprint={2410.02073},
primaryClass={cs.CV cs.LG}
} | bochkovskii2024depth |
arxiv-664816 | 2410.02074 | Price-guided user attention in large-scale E-commerce group recommendation | <|reference_start|>Price-guided user attention in large-scale E-commerce group recommendation: Existing group recommender systems utilize attention mechanisms to identify critical users who influence group decisions the most. We analyzed user attention scores from a widely-used group recommendation model on a real-world E-commerce dataset and found that item price and user interaction history significantly influence the selection of critical users. When item prices are low, users with extensive interaction histories are more influential in group decision-making. Conversely, their influence diminishes with higher item prices. Based on these observations, we propose a novel group recommendation approach that incorporates item price as a guiding factor for user aggregation. Our model employs an adaptive sigmoid function to adjust output logits based on item prices, enhancing the accuracy of user aggregation. Our model can be plugged into any attention-based group recommender system if the price information is available. We evaluate our model's performance on a public benchmark and a real-world dataset. We compare it with other state-of-the-art group recommendation methods. Our results demonstrate that our price-guided user attention approach outperforms the state-of-the-art methods in terms of hit ratio and mean square error.<|reference_end|> | arxiv | @article{shi2024price-guided,
title={Price-guided user attention in large-scale E-commerce group
recommendation},
  author={Yang Shi and Young-joo Chung},
journal={arXiv preprint arXiv:2410.02074},
year={2024},
archivePrefix={arXiv},
eprint={2410.02074},
primaryClass={cs.IR cs.LG}
} | shi2024price-guided |
arxiv-664817 | 2410.02077 | Kolmogorov-Arnold Network Autoencoders | <|reference_start|>Kolmogorov-Arnold Network Autoencoders: Deep learning models have revolutionized various domains, with Multi-Layer Perceptrons (MLPs) being a cornerstone for tasks like data regression and image classification. However, a recent study has introduced Kolmogorov-Arnold Networks (KANs) as promising alternatives to MLPs, leveraging activation functions placed on edges rather than nodes. This structural shift aligns KANs closely with the Kolmogorov-Arnold representation theorem, potentially enhancing both model accuracy and interpretability. In this study, we explore the efficacy of KANs in the context of data representation via autoencoders, comparing their performance with traditional Convolutional Neural Networks (CNNs) on the MNIST, SVHN, and CIFAR-10 datasets. Our results demonstrate that KAN-based autoencoders achieve competitive performance in terms of reconstruction accuracy, thereby suggesting their viability as effective tools in data analysis tasks.<|reference_end|> | arxiv | @article{moradi2024kolmogorov-arnold,
title={Kolmogorov-Arnold Network Autoencoders},
  author={Mohammadamin Moradi and Shirin Panahi and Erik Bollt and Ying-Cheng Lai},
journal={arXiv preprint arXiv:2410.02077},
year={2024},
archivePrefix={arXiv},
eprint={2410.02077},
primaryClass={cs.LG cs.AI cs.CV}
} | moradi2024kolmogorov-arnold |
arxiv-664818 | 2410.02078 | Posterior sampling via Langevin dynamics based on generative priors | <|reference_start|>Posterior sampling via Langevin dynamics based on generative priors: Posterior sampling in high-dimensional spaces using generative models holds significant promise for various applications, including but not limited to inverse problems and guided generation tasks. Despite many recent developments, generating diverse posterior samples remains a challenge, as existing methods require restarting the entire generative process for each new sample, making the procedure computationally expensive. In this work, we propose efficient posterior sampling by simulating Langevin dynamics in the noise space of a pre-trained generative model. By exploiting the mapping between the noise and data spaces which can be provided by distilled flows or consistency models, our method enables seamless exploration of the posterior without the need to re-run the full sampling chain, drastically reducing computational overhead. Theoretically, we prove a guarantee for the proposed noise-space Langevin dynamics to approximate the posterior, assuming that the generative model sufficiently approximates the prior distribution. Our framework is experimentally validated on image restoration tasks involving noisy linear and nonlinear forward operators applied to LSUN-Bedroom (256 x 256) and ImageNet (64 x 64) datasets. The results demonstrate that our approach generates high-fidelity samples with enhanced semantic diversity even under a limited number of function evaluations, offering superior efficiency and performance compared to existing diffusion-based posterior sampling techniques.<|reference_end|> | arxiv | @article{purohit2024posterior,
title={Posterior sampling via Langevin dynamics based on generative priors},
author={Vishal Purohit and Matthew Repasky and Jianfeng Lu and Qiang Qiu and
Yao Xie and Xiuyuan Cheng},
journal={arXiv preprint arXiv:2410.02078},
year={2024},
archivePrefix={arXiv},
eprint={2410.02078},
primaryClass={stat.ML cs.CV cs.LG}
} | purohit2024posterior |
arxiv-664819 | 2410.02079 | Deep Generative Modeling for Identification of Noisy, Non-Stationary Dynamical Systems | <|reference_start|>Deep Generative Modeling for Identification of Noisy, Non-Stationary Dynamical Systems: A significant challenge in many fields of science and engineering is making sense of time-dependent measurement data by recovering governing equations in the form of differential equations. We focus on finding parsimonious ordinary differential equation (ODE) models for nonlinear, noisy, and non-autonomous dynamical systems and propose a machine learning method for data-driven system identification. While many methods tackle noisy and limited data, non-stationarity - where differential equation parameters change over time - has received less attention. Our method, dynamic SINDy, combines variational inference with SINDy (sparse identification of nonlinear dynamics) to model time-varying coefficients of sparse ODEs. This framework allows for uncertainty quantification of ODE coefficients, expanding on previous methods for autonomous systems. These coefficients are then interpreted as latent variables and added to the system to obtain an autonomous dynamical model. We validate our approach using synthetic data, including nonlinear oscillators and the Lorenz system, and apply it to neuronal activity data from C. elegans. Dynamic SINDy uncovers a global nonlinear model, showing it can handle real, noisy, and chaotic datasets. We aim to apply our method to a variety of problems, specifically dynamic systems with complex time-dependent parameters.<|reference_end|> | arxiv | @article{voina2024deep,
title={Deep Generative Modeling for Identification of Noisy, Non-Stationary
Dynamical Systems},
author={Doris Voina and Steven Brunton and J. Nathan Kutz},
journal={arXiv preprint arXiv:2410.02079},
year={2024},
archivePrefix={arXiv},
eprint={2410.02079},
primaryClass={cs.LG q-bio.QM}
} | voina2024deep |
arxiv-664820 | 2410.02080 | EMMA: Efficient Visual Alignment in Multi-Modal LLMs | <|reference_start|>EMMA: Efficient Visual Alignment in Multi-Modal LLMs: Multi-modal Large Language Models (MLLMs) have recently exhibited impressive general-purpose capabilities by leveraging vision foundation models to encode the core concepts of images into representations. These are then combined with instructions and processed by the language model to generate high-quality responses. Despite significant progress in enhancing the language component, challenges persist in optimally fusing visual encodings within the language model for task-specific adaptability. Recent research has focused on improving this fusion through modality adaptation modules but at the cost of significantly increased model complexity and training data needs. In this paper, we propose EMMA (Efficient Multi-Modal Adaptation), a lightweight cross-modality module designed to efficiently fuse visual and textual encodings, generating instruction-aware visual representations for the language model. Our key contributions include: (1) an efficient early fusion mechanism that integrates vision and language representations with minimal added parameters (less than 0.2% increase in model size), (2) an in-depth interpretability analysis that sheds light on the internal mechanisms of the proposed method; (3) comprehensive experiments that demonstrate notable improvements on both specialized and general benchmarks for MLLMs. Empirical results show that EMMA boosts performance across multiple tasks by up to 9.3% while significantly improving robustness against hallucinations. Our code is available at https://github.com/SaraGhazanfari/EMMA<|reference_end|> | arxiv | @article{ghazanfari2024emma:,
title={EMMA: Efficient Visual Alignment in Multi-Modal LLMs},
author={Sara Ghazanfari and Alexandre Araujo and Prashanth Krishnamurthy and
Siddharth Garg and Farshad Khorrami},
journal={arXiv preprint arXiv:2410.02080},
year={2024},
archivePrefix={arXiv},
eprint={2410.02080},
primaryClass={cs.CV cs.CL cs.LG}
} | ghazanfari2024emma: |
arxiv-664821 | 2410.02081 | MixLinear: Extreme Low Resource Multivariate Time Series Forecasting with 0.1K Parameters | <|reference_start|>MixLinear: Extreme Low Resource Multivariate Time Series Forecasting with 0.1K Parameters: Recently, there has been a growing interest in Long-term Time Series Forecasting (LTSF), which involves predicting long-term future values by analyzing a large amount of historical time-series data to identify patterns and trends. There exist significant challenges in LTSF due to its complex temporal dependencies and high computational demands. Although Transformer-based models offer high forecasting accuracy, they are often too compute-intensive to be deployed on devices with hardware constraints. On the other hand, the linear models aim to reduce the computational overhead by employing either decomposition methods in the time domain or compact representations in the frequency domain. In this paper, we propose MixLinear, an ultra-lightweight multivariate time series forecasting model specifically designed for resource-constrained devices. MixLinear effectively captures both temporal and frequency domain features by modeling intra-segment and inter-segment variations in the time domain and extracting frequency variations from a low-dimensional latent space in the frequency domain. By reducing the parameter scale of a downsampled $n$-length input/output one-layer linear model from $O(n^2)$ to $O(n)$, MixLinear achieves efficient computation without sacrificing accuracy. Extensive evaluations with four benchmark datasets show that MixLinear attains forecasting performance comparable to, or surpassing, state-of-the-art models with significantly fewer parameters ($0.1K$), which makes it well-suited for deployment on devices with limited computational capacity.<|reference_end|> | arxiv | @article{ma2024mixlinear:,
title={MixLinear: Extreme Low Resource Multivariate Time Series Forecasting
with 0.1K Parameters},
author={Aitian Ma and Dongsheng Luo and Mo Sha},
journal={arXiv preprint arXiv:2410.02081},
year={2024},
archivePrefix={arXiv},
eprint={2410.02081},
primaryClass={cs.LG}
} | ma2024mixlinear: |
arxiv-664822 | 2410.02082 | FARM: Functional Group-Aware Representations for Small Molecules | <|reference_start|>FARM: Functional Group-Aware Representations for Small Molecules: We introduce Functional Group-Aware Representations for Small Molecules (FARM), a novel foundation model designed to bridge the gap between SMILES, natural language, and molecular graphs. The key innovation of FARM lies in its functional group-aware tokenization, which directly incorporates functional group information into the representations. This strategic reduction in tokenization granularity is intentionally aligned with key drivers of functional properties (i.e., functional groups), enhancing the model's understanding of chemical language. By expanding the chemical lexicon, FARM more effectively bridges SMILES and natural language, ultimately advancing the model's capacity to predict molecular properties. FARM also represents molecules from two perspectives: by using masked language modeling to capture atom-level features and by employing graph neural networks to encode the whole molecule topology. By leveraging contrastive learning, FARM aligns these two views of representations into a unified molecular embedding. We rigorously evaluate FARM on the MoleculeNet dataset, where it achieves state-of-the-art performance on 10 out of 12 tasks. These results highlight FARM's potential to improve molecular representation learning, with promising applications in drug discovery and pharmaceutical research.<|reference_end|> | arxiv | @article{nguyen2024farm:,
title={FARM: Functional Group-Aware Representations for Small Molecules},
author={Thao Nguyen and Kuan-Hao Huang and Ge Liu and Martin D. Burke and
Ying Diao and Heng Ji},
journal={arXiv preprint arXiv:2410.02082},
year={2024},
archivePrefix={arXiv},
eprint={2410.02082},
primaryClass={cs.LG q-bio.QM}
} | nguyen2024farm: |
arxiv-664823 | 2410.02084 | Generating Symbolic Music from Natural Language Prompts using an LLM-Enhanced Dataset | <|reference_start|>Generating Symbolic Music from Natural Language Prompts using an LLM-Enhanced Dataset: Recent years have seen many audio-domain text-to-music generation models that rely on large amounts of text-audio pairs for training. However, symbolic-domain controllable music generation has lagged behind partly due to the lack of a large-scale symbolic music dataset with extensive metadata and captions. In this work, we present MetaScore, a new dataset consisting of 963K musical scores paired with rich metadata, including free-form user-annotated tags, collected from an online music forum. To approach text-to-music generation, we leverage a pretrained large language model (LLM) to generate pseudo natural language captions from the metadata. With the LLM-enhanced MetaScore, we train a text-conditioned music generation model that learns to generate symbolic music from the pseudo captions, allowing control of instruments, genre, composer, complexity and other free-form music descriptors. In addition, we train a tag-conditioned system that supports a predefined set of tags available in MetaScore. Our experimental results show that both the proposed text-to-music and tags-to-music models outperform a baseline text-to-music model in a listening test, while the text-based system offers a more natural interface that allows free-form natural language prompts.<|reference_end|> | arxiv | @article{xu2024generating,
title={Generating Symbolic Music from Natural Language Prompts using an
LLM-Enhanced Dataset},
author={Weihan Xu and Julian McAuley and Taylor Berg-Kirkpatrick and Shlomo
Dubnov and Hao-Wen Dong},
journal={arXiv preprint arXiv:2410.02084},
year={2024},
archivePrefix={arXiv},
eprint={2410.02084},
primaryClass={cs.SD eess.AS}
} | xu2024generating |
arxiv-664824 | 2410.02085 | Multi-Omic and Quantum Machine Learning Integration for Lung Subtypes Classification | <|reference_start|>Multi-Omic and Quantum Machine Learning Integration for Lung Subtypes Classification: Quantum Machine Learning (QML) is a red-hot field that brings novel discoveries and exciting opportunities to resolve, speed up, or refine the analysis of a wide range of computational problems. In the realm of biomedical research and personalized medicine, the significance of multi-omics integration lies in its ability to provide a thorough and holistic comprehension of complex biological systems. This technology links fundamental research to clinical practice. The insights gained from integrated omics data can be translated into clinical tools for diagnosis, prognosis, and treatment planning. The fusion of quantum computing and machine learning holds promise for unraveling complex patterns within multi-omics datasets, providing unprecedented insights into the molecular landscape of lung cancer. Due to the heterogeneity, complexity, and high dimensionality of multi-omic cancer data, characterized by the vast number of features (such as gene expression, micro-RNA, and DNA methylation) relative to the limited number of lung cancer patient samples, our prime motivation for this paper is the integration of multi-omic data, unique feature selection, and diagnostic classification of lung subtypes: lung squamous cell carcinoma (LUSC-I) and lung adenocarcinoma (LUAD-II) using quantum machine learning. We developed a method for finding the best differentiating features between LUAD and LUSC datasets, which has the potential for biomarker discovery.<|reference_end|> | arxiv | @article{saggi2024multi-omic,
title={Multi-Omic and Quantum Machine Learning Integration for Lung Subtypes
Classification},
author={Mandeep Kaur Saggi and Amandeep Singh Bhatia and Mensah Isaiah and
Humaira Gowher and Sabre Kais},
journal={arXiv preprint arXiv:2410.02085},
year={2024},
archivePrefix={arXiv},
eprint={2410.02085},
primaryClass={cs.LG cs.AI q-bio.GN quant-ph}
} | saggi2024multi-omic |
arxiv-664825 | 2410.02086 | Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations | <|reference_start|>Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations: Multimodal learning plays a crucial role in enabling machine learning models to fuse and utilize diverse data sources, such as text, images, and audio, to support a variety of downstream tasks. A unified representation across various modalities is particularly important for improving efficiency and performance. Recent binding methods, such as ImageBind (Girdhar et al., 2023), typically use a fixed anchor modality to align multimodal data in the anchor modal embedding space. In this paper, we mathematically analyze the fixed anchor binding methods and uncover notable limitations: (1) over-reliance on the choice of the anchor modality, (2) failure to capture intra-modal information, and (3) failure to account for inter-modal correlation among non-anchored modalities. To address these limitations, we propose CentroBind, a simple yet powerful approach that eliminates the need for a fixed anchor; instead, it employs dynamically adjustable centroid-based anchors generated from all available modalities, resulting in a balanced and rich representation space. We theoretically demonstrate that our method captures three crucial properties of multimodal learning: intra-modal learning, inter-modal learning, and multimodal alignment, while also constructing a robust unified representation across all modalities. Our experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed method, showing that dynamic anchor methods outperform all fixed anchor binding methods as the former captures more nuanced multimodal interactions.<|reference_end|> | arxiv | @article{jeong2024anchors,
title={Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations},
author={Minoh Jeong and Min Namgung and Zae Myung Kim and Dongyeop Kang and
Yao-Yi Chiang and Alfred Hero},
journal={arXiv preprint arXiv:2410.02086},
year={2024},
archivePrefix={arXiv},
eprint={2410.02086},
primaryClass={cs.LG cs.CV stat.ML}
} | jeong2024anchors |
arxiv-664826 | 2410.02087 | HyperBrain: Anomaly Detection for Temporal Hypergraph Brain Networks | <|reference_start|>HyperBrain: Anomaly Detection for Temporal Hypergraph Brain Networks: Identifying unusual brain activity is a crucial task in neuroscience research, as it aids in the early detection of brain disorders. It is common to represent brain networks as graphs, and researchers have developed various graph-based machine learning methods for analyzing them. However, the majority of existing graph learning tools for the brain face a combination of the following three key limitations. First, they focus only on pairwise correlations between regions of the brain, limiting their ability to capture synchronized activity among larger groups of regions. Second, they model the brain network as a static network, overlooking the temporal changes in the brain. Third, most are designed only for classifying brain networks as healthy or disordered, lacking the ability to identify abnormal brain activity patterns linked to biomarkers associated with disorders. To address these issues, we present HyperBrain, an unsupervised anomaly detection framework for temporal hypergraph brain networks. HyperBrain models fMRI time series data as temporal hypergraphs capturing dynamic higher-order interactions. It then uses a novel customized temporal walk (BrainWalk) and neural encodings to detect abnormal co-activations among brain regions. We evaluate the performance of HyperBrain in both synthetic and real-world settings for Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder(ADHD). HyperBrain outperforms all other baselines on detecting abnormal co-activations in brain networks. Furthermore, results obtained from HyperBrain are consistent with clinical research on these brain disorders. 
Our findings suggest that learning temporal and higher-order connections in the brain provides a promising approach to uncover intricate connectivity patterns in brain networks, offering improved diagnosis.<|reference_end|> | arxiv | @article{sadeghian2024hyperbrain:,
title={HyperBrain: Anomaly Detection for Temporal Hypergraph Brain Networks},
author={Sadaf Sadeghian and Xiaoxiao Li and Margo Seltzer},
journal={arXiv preprint arXiv:2410.02087},
year={2024},
archivePrefix={arXiv},
eprint={2410.02087},
primaryClass={cs.LG q-bio.NC}
} | sadeghian2024hyperbrain: |
arxiv-664827 | 2410.02088 | Universal Logical Quantum Photonic Neural Network Processor via Cavity-Assisted Interactions | <|reference_start|>Universal Logical Quantum Photonic Neural Network Processor via Cavity-Assisted Interactions: Encoding quantum information within bosonic modes offers a promising direction for hardware-efficient and fault-tolerant quantum information processing. However, achieving high-fidelity universal control over the bosonic degree of freedom using native photonic hardware remains a challenge. Here, we propose an architecture to prepare and perform logical quantum operations on arbitrary multimode multi-photon states using a quantum photonic neural network. Central to our approach is the optical nonlinearity, which is realized through strong light-matter interaction with a three-level Lambda atomic system. The dynamics of this interaction are confined to the single-mode subspace, enabling the construction of high-fidelity quantum gates. This nonlinearity functions as a photon-number selective phase gate, which facilitates the construction of a universal gate set and serves as the element-wise activation function in our neural network architecture. Through numerical simulations, we demonstrate the versatility of our approach by executing tasks that are key to logical quantum information processing. The network is able to deterministically prepare a wide array of multimode multi-photon states, including essential resource states. We also show that the architecture is capable of encoding and performing logical operations on bosonic error-correcting codes. Additionally, by adapting components of our architecture, error-correcting circuits can be built to protect bosonic codes. The proposed architecture paves the way for near-term quantum photonic processors that enable error-corrected quantum computation, and can be achieved using present-day integrated photonic hardware.<|reference_end|> | arxiv | @article{basani2024universal,
title={Universal Logical Quantum Photonic Neural Network Processor via
Cavity-Assisted Interactions},
author={Jasvith Raj Basani and Murphy Yuezhen Niu and Edo Waks},
journal={arXiv preprint arXiv:2410.02088},
year={2024},
archivePrefix={arXiv},
eprint={2410.02088},
primaryClass={quant-ph cs.ET physics.optics}
} | basani2024universal |
arxiv-664828 | 2410.02089 | RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning | <|reference_start|>RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning: Large language models (LLMs) deployed as agents solve user-specified tasks over multiple steps while keeping the required manual engagement to a minimum. Crucially, such LLMs need to ground their generations in any feedback obtained to reliably achieve desired outcomes. We propose an end-to-end reinforcement learning method for teaching models to leverage execution feedback in the realm of code synthesis, where state-of-the-art LLMs struggle to improve code iteratively compared to independent sampling. We benchmark on competitive programming tasks, where we achieve new state-of-the-art results with both small (8B parameters) and large (70B) models while reducing the amount of samples required by an order of magnitude. Our analysis of inference-time behavior demonstrates that our method produces LLMs that effectively leverage automatic feedback over multiple steps.<|reference_end|> | arxiv | @article{gehring2024rlef:,
title={RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement
Learning},
author={Jonas Gehring and Kunhao Zheng and Jade Copet and Vegard Mella and
Taco Cohen and Gabriel Synnaeve},
journal={arXiv preprint arXiv:2410.02089},
year={2024},
archivePrefix={arXiv},
eprint={2410.02089},
primaryClass={cs.CL cs.AI}
} | gehring2024rlef: |
arxiv-664829 | 2410.02091 | The Impact of Generative AI on Collaborative Open-Source Software Development: Evidence from GitHub Copilot | <|reference_start|>The Impact of Generative AI on Collaborative Open-Source Software Development: Evidence from GitHub Copilot: Generative artificial intelligence (AI) has opened the possibility of automated content production, including coding in software development, which can significantly influence the participation and performance of software developers. To explore this impact, we investigate the role of GitHub Copilot, a generative AI pair programmer, on software development in open-source community, where multiple developers voluntarily collaborate on software projects. Using GitHub's dataset for open-source repositories and a generalized synthetic control method, we find that Copilot significantly enhances project-level productivity by 6.5%. Delving deeper, we dissect the key mechanisms driving this improvement. Our findings reveal a 5.5% increase in individual productivity and a 5.4% increase in participation. However, this is accompanied with a 41.6% increase in integration time, potentially due to higher coordination costs. Interestingly, we also observe the differential effects among developers. We discover that core developers achieve greater project-level productivity gains from using Copilot, benefiting more in terms of individual productivity and participation compared to peripheral developers, plausibly due to their deeper familiarity with software projects. We also find that the increase in project-level productivity is accompanied with no change in code quality. We conclude that AI pair programmers bring benefits to developers to automate and augment their code, but human developers' knowledge of software projects can enhance the benefits. 
In summary, our research underscores the role of AI pair programmers in impacting project-level productivity within the open-source community and suggests potential implications for the structure of open-source software projects.<|reference_end|> | arxiv | @article{song2024the,
title={The Impact of Generative AI on Collaborative Open-Source Software
Development: Evidence from GitHub Copilot},
author={Fangchen Song and Ashish Agarwal and Wen Wen},
journal={arXiv preprint arXiv:2410.02091},
year={2024},
archivePrefix={arXiv},
eprint={2410.02091},
primaryClass={cs.SE cs.AI cs.HC econ.GN q-fin.EC}
} | song2024the |
arxiv-664830 | 2410.02093 | First-order empirical interpolation method for real-time solution of parametric time-dependent nonlinear PDEs | <|reference_start|>First-order empirical interpolation method for real-time solution of parametric time-dependent nonlinear PDEs: We present a model reduction approach for the real-time solution of time-dependent nonlinear partial differential equations (PDEs) with parametric dependencies. The approach integrates several ingredients to develop efficient and accurate reduced-order models. Proper orthogonal decomposition is used to construct a reduced-basis (RB) space which provides a rapidly convergent approximation of the parametric solution manifold. The Galerkin projection is employed to reduce the dimensionality of the problem by projecting the weak formulation of the governing PDEs onto the RB space. A major challenge in model reduction for nonlinear PDEs is the efficient treatment of nonlinear terms, which we address by unifying the implementation of several hyperreduction methods. We introduce a first-order empirical interpolation method to approximate the nonlinear terms and recover the computational efficiency. We demonstrate the effectiveness of our methodology through its application to the Allen-Cahn equation, which models phase separation processes, and the Buckley-Leverett equation, which describes two-phase fluid flow in porous media. Numerical results highlight the accuracy, efficiency, and stability of the proposed approach.<|reference_end|> | arxiv | @article{nguyen2024first-order,
title={First-order empirical interpolation method for real-time solution of
parametric time-dependent nonlinear PDEs},
author={Ngoc Cuong Nguyen},
journal={arXiv preprint arXiv:2410.02093},
year={2024},
archivePrefix={arXiv},
eprint={2410.02093},
primaryClass={math.NA cs.NA math.AP}
} | nguyen2024first-order |
arxiv-664831 | 2410.02094 | Tracking objects that change in appearance with phase synchrony | <|reference_start|>Tracking objects that change in appearance with phase synchrony: Objects we encounter often change appearance as we interact with them. Changes in illumination (shadows), object pose, or movement of nonrigid objects can drastically alter available image features. How do biological visual systems track objects as they change? It may involve specific attentional mechanisms for reasoning about the locations of objects independently of their appearances -- a capability that prominent neuroscientific theories have associated with computing through neural synchrony. We computationally test the hypothesis that the implementation of visual attention through neural synchrony underlies the ability of biological visual systems to track objects that change in appearance over time. We first introduce a novel deep learning circuit that can learn to precisely control attention to features separately from their location in the world through neural synchrony: the complex-valued recurrent neural network (CV-RNN). Next, we compare object tracking in humans, the CV-RNN, and other deep neural networks (DNNs), using FeatureTracker: a large-scale challenge that asks observers to track objects as their locations and appearances change in precisely controlled ways. While humans effortlessly solved FeatureTracker, state-of-the-art DNNs did not. In contrast, our CV-RNN behaved similarly to humans on the challenge, providing a computational proof-of-concept for the role of phase synchronization as a neural substrate for tracking appearance-morphing objects as they move about.<|reference_end|> | arxiv | @article{muzellec2024tracking,
title={Tracking objects that change in appearance with phase synchrony},
author={Sabine Muzellec and Drew Linsley and Alekh K. Ashok and Ennio
Mingolla and Girik Malik and Rufin VanRullen and Thomas Serre},
journal={arXiv preprint arXiv:2410.02094},
year={2024},
archivePrefix={arXiv},
eprint={2410.02094},
primaryClass={cs.AI cs.CV q-bio.NC}
} | muzellec2024tracking |
arxiv-664832 | 2410.02095 | DomainLynx: Leveraging Large Language Models for Enhanced Domain Squatting Detection | <|reference_start|>DomainLynx: Leveraging Large Language Models for Enhanced Domain Squatting Detection: Domain squatting poses a significant threat to Internet security, with attackers employing increasingly sophisticated techniques. This study introduces DomainLynx, an innovative compound AI system leveraging Large Language Models (LLMs) for enhanced domain squatting detection. Unlike existing methods focusing on predefined patterns for top-ranked domains, DomainLynx excels in identifying novel squatting techniques and protecting less prominent brands. The system's architecture integrates advanced data processing, intelligent domain pairing, and LLM-powered threat assessment. Crucially, DomainLynx incorporates specialized components that mitigate LLM hallucinations, ensuring reliable and context-aware detection. This approach enables efficient analysis of vast security data from diverse sources, including Certificate Transparency logs, Passive DNS records, and zone files. Evaluated on a curated dataset of 1,649 squatting domains, DomainLynx achieved 94.7\% accuracy using Llama-3-70B. In a month-long real-world test, it detected 34,359 squatting domains from 2.09 million new domains, outperforming baseline methods by 2.5 times. This research advances Internet security by providing a versatile, accurate, and adaptable tool for combating evolving domain squatting threats. DomainLynx's approach paves the way for more robust, AI-driven cybersecurity solutions, enhancing protection for a broader range of online entities and contributing to a safer digital ecosystem.<|reference_end|> | arxiv | @article{chiba2024domainlynx:,
title={DomainLynx: Leveraging Large Language Models for Enhanced Domain
Squatting Detection},
author={Daiki Chiba and Hiroki Nakano and Takashi Koide},
journal={arXiv preprint arXiv:2410.02095},
year={2024},
archivePrefix={arXiv},
eprint={2410.02095},
primaryClass={cs.CR}
} | chiba2024domainlynx: |
arxiv-664833 | 2410.02096 | DomainDynamics: Lifecycle-Aware Risk Timeline Construction for Domain Names | <|reference_start|>DomainDynamics: Lifecycle-Aware Risk Timeline Construction for Domain Names: The persistent threat posed by malicious domain names in cyber-attacks underscores the urgent need for effective detection mechanisms. Traditional machine learning methods, while capable of identifying such domains, often suffer from high false positive and false negative rates due to their extensive reliance on historical data. Conventional approaches often overlook the dynamic nature of domain names, the purposes and ownership of which may evolve, potentially rendering risk assessments outdated or irrelevant. To address these shortcomings, we introduce DomainDynamics, a novel system designed to predict domain name risks by considering their lifecycle stages. DomainDynamics constructs a timeline for each domain, evaluating the characteristics of each domain at various points in time to make informed, temporal risk determinations. In an evaluation experiment involving over 85,000 actual malicious domains from malware and phishing incidents, DomainDynamics demonstrated a significant improvement in detection rates, achieving an 82.58\% detection rate with a low false positive rate of 0.41\%. This performance surpasses that of previous studies and commercial services, improving detection capability substantially.<|reference_end|> | arxiv | @article{chiba2024domaindynamics:,
title={DomainDynamics: Lifecycle-Aware Risk Timeline Construction for Domain
Names},
author={Daiki Chiba and Hiroki Nakano and Takashi Koide},
journal={arXiv preprint arXiv:2410.02096},
year={2024},
archivePrefix={arXiv},
eprint={2410.02096},
primaryClass={cs.CR}
} | chiba2024domaindynamics: |
arxiv-664834 | 2410.02097 | DomainHarvester: Harvesting Infrequently Visited Yet Trustworthy Domain Names | <|reference_start|>DomainHarvester: Harvesting Infrequently Visited Yet Trustworthy Domain Names: In cybersecurity, allow lists play a crucial role in distinguishing safe websites from potential threats. Conventional methods for compiling allow lists, focusing heavily on website popularity, often overlook infrequently visited legitimate domains. This paper introduces DomainHarvester, a system aimed at generating allow lists that include trustworthy yet infrequently visited domains. By adopting an innovative bottom-up methodology that leverages the web's hyperlink structure, DomainHarvester identifies legitimate yet underrepresented domains. The system uses seed URLs to gather domain names, employing machine learning with a Transformer-based approach to assess their trustworthiness. DomainHarvester has developed two distinct allow lists: one with a global focus and another emphasizing local relevance. Compared to six existing top lists, DomainHarvester's allow lists show minimal overlaps, 4\% globally and 0.1\% locally, while significantly reducing the risk of including malicious domains, thereby enhancing security. The contributions of this research are substantial, illuminating the overlooked aspect of trustworthy yet underrepresented domains and introducing DomainHarvester, a system that goes beyond traditional popularity-based metrics. Our methodology enhances the inclusivity and precision of allow lists, offering significant advantages to users and businesses worldwide, especially in non-English speaking regions.<|reference_end|> | arxiv | @article{chiba2024domainharvester:,
title={DomainHarvester: Harvesting Infrequently Visited Yet Trustworthy Domain
Names},
author={Daiki Chiba and Hiroki Nakano and Takashi Koide},
journal={arXiv preprint arXiv:2410.02097},
year={2024},
archivePrefix={arXiv},
eprint={2410.02097},
primaryClass={cs.CR}
} | chiba2024domainharvester: |
arxiv-664835 | 2410.02098 | EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing | <|reference_start|>EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing: Diffusion transformers have been widely adopted for text-to-image synthesis. While scaling these models up to billions of parameters shows promise, the effectiveness of scaling beyond current sizes remains underexplored and challenging. By explicitly exploiting the computational heterogeneity of image generations, we develop a new family of Mixture-of-Experts (MoE) models (EC-DIT) for diffusion transformers with expert-choice routing. EC-DIT learns to adaptively optimize the compute allocated to understand the input texts and generate the respective image patches, enabling heterogeneous computation aligned with varying text-image complexities. This heterogeneity provides an efficient way of scaling EC-DIT up to 97 billion parameters and achieving significant improvements in training convergence, text-to-image alignment, and overall generation quality over dense models and conventional MoE models. Through extensive ablations, we show that EC-DIT demonstrates superior scalability and adaptive compute allocation by recognizing varying textual importance through end-to-end training. Notably, in text-to-image alignment evaluation, our largest models achieve a state-of-the-art GenEval score of 71.68% and still maintain competitive inference speed with intuitive interpretability.<|reference_end|> | arxiv | @article{sun2024ec-dit:,
title={EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice
Routing},
author={Haotian Sun and Tao Lei and Bowen Zhang and Yanghao Li and Haoshuo
Huang and Ruoming Pang and Bo Dai and Nan Du},
journal={arXiv preprint arXiv:2410.02098},
year={2024},
archivePrefix={arXiv},
eprint={2410.02098},
primaryClass={cs.CV cs.LG}
} | sun2024ec-dit: |
arxiv-664836 | 2410.02099 | A Watermark for Black-Box Language Models | <|reference_start|>A Watermark for Black-Box Language Models: Watermarking has recently emerged as an effective strategy for detecting the outputs of large language models (LLMs). Most existing schemes require white-box access to the model's next-token probability distribution, which is typically not accessible to downstream users of an LLM API. In this work, we propose a principled watermarking scheme that requires only the ability to sample sequences from the LLM (i.e., black-box access), boasts a distortion-free property, and can be chained or nested using multiple secret keys. We provide performance guarantees, demonstrate how it can be leveraged when white-box access is available, and show when it can outperform existing white-box schemes via comprehensive experiments.<|reference_end|> | arxiv | @article{bahri2024a,
title={A Watermark for Black-Box Language Models},
author={Dara Bahri and John Wieting and Dana Alon and Donald Metzler},
journal={arXiv preprint arXiv:2410.02099},
year={2024},
archivePrefix={arXiv},
eprint={2410.02099},
primaryClass={cs.CR cs.CL cs.LG}
} | bahri2024a |
arxiv-664837 | 2410.02100 | High-order empirical interpolation methods for real time solution of parametrized nonlinear PDEs | <|reference_start|>High-order empirical interpolation methods for real time solution of parametrized nonlinear PDEs: We present novel model reduction methods for rapid solution of parametrized nonlinear partial differential equations (PDEs) in real-time or many-query contexts. Our approach combines a reduced basis (RB) space for rapidly convergent approximation of the parametric solution manifold, Galerkin projection of the underlying PDEs onto the RB space for dimensionality reduction, and high-order empirical interpolation for efficient treatment of the nonlinear terms. We propose a class of high-order empirical interpolation methods to derive basis functions and interpolation points by using high-order partial derivatives of the nonlinear terms. As these methods can generate high-quality basis functions and interpolation points from a snapshot set of full-order model (FOM) solutions, they significantly improve the approximation accuracy. We develop an effective a posteriori estimator to quantify the interpolation errors and construct a parameter sample via greedy sampling. Furthermore, we implement two hyperreduction schemes to construct efficient reduced-order models: one that applies the empirical interpolation before Newton's method and another after. The latter scheme shows flexibility in controlling hyperreduction errors. Numerical results are presented to demonstrate the accuracy and efficiency of the proposed methods.<|reference_end|> | arxiv | @article{nguyen2024high-order,
title={High-order empirical interpolation methods for real time solution of
parametrized nonlinear PDEs},
author={Ngoc Cuong Nguyen},
journal={arXiv preprint arXiv:2410.02100},
year={2024},
archivePrefix={arXiv},
eprint={2410.02100},
primaryClass={math.NA cs.NA math.AP}
} | nguyen2024high-order |
arxiv-664838 | 2410.02101 | Orient Anything | <|reference_start|>Orient Anything: Orientation estimation is a fundamental task in 3D shape analysis which consists of estimating a shape's orientation axes: its side-, up-, and front-axes. Using this data, one can rotate a shape into canonical orientation, where its orientation axes are aligned with the coordinate axes. Developing an orientation algorithm that reliably estimates complete orientations of general shapes remains an open problem. We introduce a two-stage orientation pipeline that achieves state of the art performance on up-axis estimation and further demonstrate its efficacy on full-orientation estimation, where one seeks all three orientation axes. Unlike previous work, we train and evaluate our method on all of Shapenet rather than a subset of classes. We motivate our engineering contributions by theory describing fundamental obstacles to orientation estimation for rotationally-symmetric shapes, and show how our method avoids these obstacles.<|reference_end|> | arxiv | @article{scarvelis2024orient,
title={Orient Anything},
author={Christopher Scarvelis and David Benhaim and Paul Zhang},
journal={arXiv preprint arXiv:2410.02101},
year={2024},
archivePrefix={arXiv},
eprint={2410.02101},
primaryClass={cs.CV cs.LG}
} | scarvelis2024orient |
arxiv-664839 | 2410.02102 | Racing Thoughts: Explaining Large Language Model Contextualization Errors | <|reference_start|>Racing Thoughts: Explaining Large Language Model Contextualization Errors: The profound success of transformer-based language models can largely be attributed to their ability to integrate relevant contextual information from an input sequence in order to generate a response or complete a task. However, we know very little about the algorithms that a model employs to implement this capability, nor do we understand their failure modes. For example, given the prompt "John is going fishing, so he walks over to the bank. Can he make an ATM transaction?", a model may incorrectly respond "Yes" if it has not properly contextualized "bank" as a geographical feature, rather than a financial institution. We propose the LLM Race Conditions Hypothesis as an explanation of contextualization errors of this form. This hypothesis identifies dependencies between tokens (e.g., "bank" must be properly contextualized before the final token, "?", integrates information from "bank"), and claims that contextualization errors are a result of violating these dependencies. Using a variety of techniques from mechanistic interpretability, we provide correlational and causal evidence in support of the hypothesis, and suggest inference-time interventions to address it.<|reference_end|> | arxiv | @article{lepori2024racing,
title={Racing Thoughts: Explaining Large Language Model Contextualization
Errors},
author={Michael A. Lepori and Michael Mozer and Asma Ghandeharioun},
journal={arXiv preprint arXiv:2410.02102},
year={2024},
archivePrefix={arXiv},
eprint={2410.02102},
primaryClass={cs.CL}
} | lepori2024racing |
arxiv-664840 | 2410.02103 | MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis | <|reference_start|>MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis: Recent works in volume rendering, e.g., NeRF and 3D Gaussian Splatting (3DGS), significantly advance the rendering quality and efficiency with the help of the learned implicit neural radiance field or 3D Gaussians. Rendering on top of an explicit representation, the vanilla 3DGS and its variants deliver real-time efficiency by optimizing the parametric model with single-view supervision per iteration during training which is adopted from NeRF. Consequently, certain views are overfitted, leading to unsatisfying appearance in novel-view synthesis and imprecise 3D geometries. To solve the aforementioned problems, we propose a new 3DGS optimization method embodying four key novel contributions: 1) We transform the conventional single-view training paradigm into a multi-view training strategy. With our proposed multi-view regulation, 3D Gaussian attributes are further optimized without overfitting certain training views. As a general solution, we improve the overall accuracy in a variety of scenarios and different Gaussian variants. 2) Inspired by the benefit introduced by additional views, we further propose a cross-intrinsic guidance scheme, leading to a coarse-to-fine training procedure concerning different resolutions. 3) Built on top of our multi-view regulated training, we further propose a cross-ray densification strategy, densifying more Gaussian kernels in the ray-intersect regions from a selection of views. 4) By further investigating the densification strategy, we found that the effect of densification should be enhanced when certain views are dramatically distinct. As a solution, we propose a novel multi-view augmented densification strategy, where 3D Gaussians are encouraged to get densified to a sufficient number accordingly, resulting in improved reconstruction accuracy.<|reference_end|> | arxiv | @article{du2024mvgs:,
title={MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis},
author={Xiaobiao Du and Yida Wang and Xin Yu},
journal={arXiv preprint arXiv:2410.02103},
year={2024},
archivePrefix={arXiv},
eprint={2410.02103},
primaryClass={cs.CV}
} | du2024mvgs: |
arxiv-664841 | 2410.02106 | Safe Navigation in Unmapped Environments for Robotic Systems with Input Constraints | <|reference_start|>Safe Navigation in Unmapped Environments for Robotic Systems with Input Constraints: This paper presents an approach for navigation and control in unmapped environments under input and state constraints using a composite control barrier function (CBF). We consider the scenario where real-time perception feedback (e.g., LiDAR) is used online to construct a local CBF that models local state constraints (e.g., local safety constraints such as obstacles) in the a priori unmapped environment. The approach employs a soft-maximum function to synthesize a single time-varying CBF from the N most recently obtained local CBFs. Next, the input constraints are transformed into controller-state constraints through the use of control dynamics. Then, we use a soft-minimum function to compose the input constraints with the time-varying CBF that models the a priori unmapped environment. This composition yields a single relaxed CBF, which is used in a constrained optimization to obtain an optimal control that satisfies the state and input constraints. The approach is validated through simulations of a nonholonomic ground robot that is equipped with LiDAR and navigates an unmapped environment. The robot successfully navigates the environment while avoiding the a priori unmapped obstacles and satisfying both speed and input constraints.<|reference_end|> | arxiv | @article{safari2024safe,
title={Safe Navigation in Unmapped Environments for Robotic Systems with Input
Constraints},
author={Amirsaeid Safari and Jesse B. Hoagg},
journal={arXiv preprint arXiv:2410.02106},
year={2024},
archivePrefix={arXiv},
eprint={2410.02106},
primaryClass={cs.RO cs.SY eess.SY}
} | safari2024safe |
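The abstract above composes constraints with soft-maximum and soft-minimum functions. As a minimal numerical sketch (not the paper's controller), the standard log-sum-exp forms of these smooth bounds can be checked directly; the barrier values in `h` and the sharpness `k` are illustrative assumptions:

```python
import math

def soft_max(h, k=10.0):
    # Smooth over-approximation of max(h): (1/k) * log(sum_i exp(k * h_i))
    return math.log(sum(math.exp(k * v) for v in h)) / k

def soft_min(h, k=10.0):
    # Smooth under-approximation of min(h): -(1/k) * log(sum_i exp(-k * h_i))
    return -math.log(sum(math.exp(-k * v) for v in h)) / k

h = [0.3, 1.2, 0.5]           # hypothetical barrier-function values h_i(x)
print(soft_min(h) <= min(h))  # True: soft-min never overestimates the minimum
print(soft_max(h) >= max(h))  # True: soft-max never underestimates the maximum
```

Because the soft-minimum under-approximates the true minimum, a composed constraint of this form is conservative: satisfying it implies satisfying every individual constraint.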
arxiv-664842 | 2410.02107 | Safety Verification of Stochastic Systems: A Set-Erosion Approach | <|reference_start|>Safety Verification of Stochastic Systems: A Set-Erosion Approach: We study the safety verification problem for discrete-time stochastic systems. We propose an approach for safety verification termed set-erosion strategy that verifies the safety of a stochastic system on a safe set through the safety of its associated deterministic system on an eroded subset. The amount of erosion is captured by the probabilistic bound on the distance between stochastic trajectories and their associated deterministic counterpart. Building on our recent work [1], we establish a sharp probabilistic bound on this distance. Combining this bound with the set-erosion strategy, we establish a general framework for the safety verification of stochastic systems. Our method is flexible and can work effectively with any deterministic safety verification techniques. We exemplify our method by incorporating barrier functions designed for deterministic safety verification, obtaining barrier certificates much tighter than existing results. Numerical experiments are conducted to demonstrate the efficacy and superiority of our method.<|reference_end|> | arxiv | @article{liu2024safety,
title={Safety Verification of Stochastic Systems: A Set-Erosion Approach},
author={Zishun Liu and Saber Jafarpour and Yongxin Chen},
journal={arXiv preprint arXiv:2410.02107},
year={2024},
archivePrefix={arXiv},
eprint={2410.02107},
primaryClass={eess.SY cs.SY}
} | liu2024safety |
arxiv-664843 | 2410.02108 | ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement | <|reference_start|>ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement: Post-training Large Language Models (LLMs) with explicit reasoning trajectories can enhance their reasoning abilities. However, acquiring such high-quality trajectory data typically demands meticulous supervision from humans or superior models, which can be either expensive or license-constrained. In this paper, we explore how far an LLM can improve its reasoning by self-synthesizing reasoning paths as training data without any additional supervision. Existing self-synthesizing methods, such as STaR, suffer from poor generalization to out-of-domain (OOD) reasoning tasks. We hypothesize this is because their self-synthesized reasoning paths are too task-specific, lacking general task-agnostic reasoning guidance. To address this, we propose Reasoning Generalist via Self-Improvement (ReGenesis), a method to self-synthesize reasoning paths as post-training data by progressing from abstract to concrete. More specifically, ReGenesis self-synthesizes reasoning paths by converting general reasoning guidelines into task-specific ones, generating reasoning structures, and subsequently transforming these structures into reasoning paths, without the need for human-designed task-specific examples used in existing methods. We show that ReGenesis achieves superior performance on all in-domain and OOD settings tested compared to existing methods. For six OOD tasks specifically, while previous methods exhibited an average performance decrease of approximately 4.6% after post-training, ReGenesis delivers around 6.1% performance improvement. We also conduct in-depth analysis of our framework and show ReGenesis is effective across various LLMs and design choices.<|reference_end|> | arxiv | @article{peng2024regenesis:,
title={ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement},
author={Xiangyu Peng and Congying Xia and Xinyi Yang and Caiming Xiong and
Chien-Sheng Wu and Chen Xing},
journal={arXiv preprint arXiv:2410.02108},
year={2024},
archivePrefix={arXiv},
eprint={2410.02108},
primaryClass={cs.CL}
} | peng2024regenesis: |
arxiv-664844 | 2410.02110 | Can LLMs Reliably Simulate Human Learner Actions? A Simulation Authoring Framework for Open-Ended Learning Environments | <|reference_start|>Can LLMs Reliably Simulate Human Learner Actions? A Simulation Authoring Framework for Open-Ended Learning Environments: Simulating learner actions helps stress-test open-ended interactive learning environments and prototype new adaptations before deployment. While recent studies show the promise of using large language models (LLMs) for simulating human behavior, such approaches have not gone beyond rudimentary proof-of-concept stages due to key limitations. First, LLMs are highly sensitive to minor prompt variations, raising doubts about their ability to generalize to new scenarios without extensive prompt engineering. Moreover, apparently successful outcomes can often be unreliable, either because domain experts unintentionally guide LLMs to produce expected results, leading to self-fulfilling prophecies; or because the LLM has encountered highly similar scenarios in its training data, meaning that models may not be simulating behavior so much as regurgitating memorized content. To address these challenges, we propose Hyp-Mix, a simulation authoring framework that allows experts to develop and evaluate simulations by combining testable hypotheses about learner behavior. Testing this framework in a physics learning environment, we found that GPT-4 Turbo maintains calibrated behavior even as the underlying learner model changes, providing the first evidence that LLMs can be used to simulate realistic behaviors in open-ended interactive learning environments, a necessary prerequisite for useful LLM behavioral simulation.<|reference_end|> | arxiv | @article{mannekote2024can,
title={Can LLMs Reliably Simulate Human Learner Actions? A Simulation Authoring
Framework for Open-Ended Learning Environments},
author={Amogh Mannekote and Adam Davies and Jina Kang and Kristy Elizabeth
Boyer},
journal={arXiv preprint arXiv:2410.02110},
year={2024},
archivePrefix={arXiv},
eprint={2410.02110},
primaryClass={cs.AI cs.CL cs.LG}
} | mannekote2024can |
arxiv-664845 | 2410.02113 | Mamba Neural Operator: Who Wins? Transformers vs State-Space Models for PDEs | <|reference_start|>Mamba Neural Operator: Who Wins? Transformers vs State-Space Models for PDEs: Partial differential equations (PDEs) are widely used to model complex physical systems, but solving them efficiently remains a significant challenge. Recently, Transformers have emerged as the preferred architecture for PDEs due to their ability to capture intricate dependencies. However, they struggle with representing continuous dynamics and long-range interactions. To overcome these limitations, we introduce the Mamba Neural Operator (MNO), a novel framework that enhances neural operator-based techniques for solving PDEs. MNO establishes a formal theoretical connection between structured state-space models (SSMs) and neural operators, offering a unified structure that can adapt to diverse architectures, including Transformer-based models. By leveraging the structured design of SSMs, MNO captures long-range dependencies and continuous dynamics more effectively than traditional Transformers. Through extensive analysis, we show that MNO significantly boosts the expressive power and accuracy of neural operators, making it not just a complement but a superior framework for PDE-related tasks, bridging the gap between efficient representation and accurate solution approximation.<|reference_end|> | arxiv | @article{cheng2024mamba,
title={Mamba Neural Operator: Who Wins? Transformers vs. State-Space Models for
PDEs},
author={Chun-Wun Cheng and Jiahao Huang and Yi Zhang and Guang Yang and
Carola-Bibiane Sch\"onlieb and Angelica I Aviles-Rivero},
journal={arXiv preprint arXiv:2410.02113},
year={2024},
archivePrefix={arXiv},
eprint={2410.02113},
primaryClass={cs.LG cs.NA math.NA}
} | cheng2024mamba |
arxiv-664846 | 2410.02114 | Iterated Radical Expansions and Convergence | <|reference_start|>Iterated Radical Expansions and Convergence: We treat three recurrences involving square roots, the first of which arises from an infinite simple radical expansion for the Golden mean, whose precise convergence rate was made famous by Richard Bruce Paris in 1987. A never-before-seen proof of an important formula is given. The other recurrences are non-exponential yet equally interesting. Asymptotic series developed for each of these two examples feature a constant, dependent on the initial condition but otherwise intrinsic to the function at hand.<|reference_end|> | arxiv | @article{finch2024iterated,
title={Iterated Radical Expansions and Convergence},
author={Steven Finch},
journal={arXiv preprint arXiv:2410.02114},
year={2024},
archivePrefix={arXiv},
eprint={2410.02114},
primaryClass={math.NT cs.DM math.CO}
} | finch2024iterated |
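The abstract above refers to the infinite simple radical expansion of the golden mean. A quick numerical sketch of that first recurrence (the precise convergence-rate constant analyzed by Paris is not computed here):

```python
import math

# Iterate x_{n+1} = sqrt(1 + x_n), the truncation of the infinite radical
# sqrt(1 + sqrt(1 + ...)); its limit is the golden mean (1 + sqrt(5)) / 2.
x = 1.0
for _ in range(40):
    x = math.sqrt(1.0 + x)

golden = (1.0 + math.sqrt(5.0)) / 2.0
print(abs(x - golden))  # error shrinks by roughly 1/(2*golden) per step
```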
arxiv-664847 | 2410.02115 | L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding? | <|reference_start|>L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?: Long-context models (LCMs) have made remarkable strides in recent years, offering users great convenience for handling tasks that involve long context, such as document summarization. As the community increasingly prioritizes the faithfulness of generated results, merely ensuring the accuracy of LCM outputs is insufficient, as it is quite challenging for humans to verify the results from the extremely lengthy context. Yet, although some efforts have been made to assess whether LCMs respond truly based on the context, these works are either limited to specific tasks or heavily rely on external evaluation resources like GPT4. In this work, we introduce L-CiteEval, a comprehensive multi-task benchmark for long-context understanding with citations, aiming to evaluate both the understanding capability and faithfulness of LCMs. L-CiteEval covers 11 tasks from diverse domains, spanning context lengths from 8K to 48K, and provides a fully automated evaluation suite. Through testing with 11 cutting-edge closed-source and open-source LCMs, we find that although these models show minor differences in their generated results, open-source models substantially trail behind their closed-source counterparts in terms of citation accuracy and recall. This suggests that current open-source LCMs are prone to responding based on their inherent knowledge rather than the given context, posing a significant risk to the user experience in practical applications. We also evaluate the RAG approach and observe that RAG can significantly improve the faithfulness of LCMs, albeit with a slight decrease in the generation quality. Furthermore, we discover a correlation between the attention mechanisms of LCMs and the citation generation process.<|reference_end|> | arxiv | @article{tang2024l-citeeval:,
title={L-CiteEval: Do Long-Context Models Truly Leverage Context for
Responding?},
author={Zecheng Tang and Keyan Zhou and Juntao Li and Baibei Ji and Jianye
Hou and Min Zhang},
journal={arXiv preprint arXiv:2410.02115},
year={2024},
archivePrefix={arXiv},
eprint={2410.02115},
primaryClass={cs.CL}
} | tang2024l-citeeval: |
arxiv-664848 | 2410.02116 | Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | <|reference_start|>Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks: Dataset distillation (DD) generates small synthetic datasets that can efficiently train deep networks with a limited amount of memory and compute. Despite the success of DD methods for supervised learning, DD for self-supervised learning (SSL) pre-training of deep models has remained unaddressed. Pre-training on unlabeled data is crucial for efficiently generalizing to downstream tasks with limited labeled data. In this work, we propose the first effective DD method for SSL pre-training. First, we show, theoretically and empirically, that naive application of supervised DD methods to SSL fails, due to the high variance of the SSL gradient. Then, we address this issue by relying on insights from knowledge distillation (KD) literature. Specifically, we train a small student model to match the representations of a larger teacher model trained with SSL. Then, we generate a small synthetic dataset by matching the training trajectories of the student models. As the KD objective has considerably lower variance than SSL, our approach can generate synthetic datasets that can successfully pre-train high-quality encoders. Through extensive experiments, we show that our distilled sets lead to up to 13% higher accuracy than prior work, on a variety of downstream tasks, in the presence of limited labeled data.<|reference_end|> | arxiv | @article{joshi2024dataset,
title={Dataset Distillation via Knowledge Distillation: Towards Efficient
Self-Supervised Pre-Training of Deep Networks},
author={Siddharth Joshi and Jiayi Ni and Baharan Mirzasoleiman},
journal={arXiv preprint arXiv:2410.02116},
year={2024},
archivePrefix={arXiv},
eprint={2410.02116},
primaryClass={cs.LG}
} | joshi2024dataset |
arxiv-664849 | 2410.02117 | Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices | <|reference_start|>Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices: Dense linear layers are the dominant computational bottleneck in large neural networks, presenting a critical need for more efficient alternatives. Previous efforts focused on a small number of hand-crafted structured matrices and neglected to investigate whether these structures can surpass dense layers in terms of compute-optimal scaling laws when both the model size and training examples are optimally allocated. In this work, we present a unifying framework that enables searching among all linear operators expressible via an Einstein summation. This framework encompasses many previously proposed structures, such as low-rank, Kronecker, Tensor-Train, Block Tensor-Train (BTT), and Monarch, along with many novel structures. To analyze the framework, we develop a taxonomy of all such operators based on their computational and algebraic properties and show that differences in the compute-optimal scaling laws are mostly governed by a small number of variables that we introduce. Namely, a small $\omega$ (which measures parameter sharing) and large $\psi$ (which measures the rank) reliably led to better scaling laws. Guided by the insight that full-rank structures that maximize parameters per unit of compute perform the best, we propose BTT-MoE, a novel Mixture-of-Experts (MoE) architecture obtained by sparsifying computation in the BTT structure. In contrast to the standard sparse MoE for each entire feed-forward network, BTT-MoE learns an MoE in every single linear layer of the model, including the projection matrices in the attention blocks. We find BTT-MoE provides a substantial compute-efficiency gain over dense layers and standard MoE.<|reference_end|> | arxiv | @article{potapczynski2024searching,
title={Searching for Efficient Linear Layers over a Continuous Space of
Structured Matrices},
author={Andres Potapczynski and Shikai Qiu and Marc Finzi and Christopher
Ferri and Zixi Chen and Micah Goldblum and Bayan Bruss and Christopher De Sa
and Andrew Gordon Wilson},
journal={arXiv preprint arXiv:2410.02117},
year={2024},
archivePrefix={arXiv},
eprint={2410.02117},
primaryClass={cs.LG stat.ML}
} | potapczynski2024searching |
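The abstract above concerns linear operators expressible as Einstein summations. As one illustrative, hypothetical instance (not the paper's search framework), a low-rank layer W = U V is such an einsum and agrees with the dense matrix it factorizes:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 3
U = rng.standard_normal((d_out, r))  # illustrative factors; the paper's
V = rng.standard_normal((r, d_in))   # framework searches over such structures
x = rng.standard_normal(d_in)

y_einsum = np.einsum("or,ri,i->o", U, V, x)  # structured (low-rank) operator
y_dense = (U @ V) @ x                        # equivalent dense matrix-vector
print(np.allclose(y_einsum, y_dense))        # True
```

The structured form uses r*(d_in + d_out) parameters instead of d_in*d_out, which is the kind of compute/parameter trade-off the entry's taxonomy characterizes.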
arxiv-664850 | 2410.02118 | A Comprehensive Review of Propagation Models in Complex Networks: From Deterministic to Deep Learning Approaches | <|reference_start|>A Comprehensive Review of Propagation Models in Complex Networks: From Deterministic to Deep Learning Approaches: Understanding propagation mechanisms in complex networks is essential for fields like epidemiology and multi-robot networks. This paper reviews various propagation models, from traditional deterministic frameworks to advanced data-driven and deep learning approaches. We differentiate between static and dynamic networks, noting that static models provide foundational insights, while dynamic models capture real-world temporal changes. Deterministic models like the SIR framework offer clear mathematical insights but often lack adaptability to randomness, whereas stochastic models enhance realism at the cost of interpretability. Behavior-based models focus on individual decision-making, demanding more computational resources. Data-driven approaches improve accuracy in nonlinear scenarios by adapting to evolving networks, using either traditional models or model-free machine learning techniques. We explore supervised and unsupervised learning methods, as well as reinforcement learning, which operates without predefined datasets. The application of graph neural networks (GNNs) is also discussed, highlighting their effectiveness in modeling propagation in complex networks. The paper underscores key applications and challenges associated with each model type, emphasizing the increasing importance of hybrid and machine learning-based solutions in contemporary network propagation issues.<|reference_end|> | arxiv | @article{wu2024a,
title={A Comprehensive Review of Propagation Models in Complex Networks: From
Deterministic to Deep Learning Approaches},
author={Bin Wu and Sifu Luo and C. Steve Suh},
journal={arXiv preprint arXiv:2410.02118},
year={2024},
archivePrefix={arXiv},
eprint={2410.02118},
primaryClass={cs.SI}
} | wu2024a |
arxiv-664851 | 2410.02120 | Lossy Cooperative UAV Relaying Networks: Outage Probability Analysis and Location Optimization | <|reference_start|>Lossy Cooperative UAV Relaying Networks: Outage Probability Analysis and Location Optimization: In this paper, performance of a lossy cooperative unmanned aerial vehicle (UAV) relay communication system is analyzed. In this system, the UAV relay adopts lossy forward (LF) strategy and the receiver has certain distortion requirements for the received information. For the system described above, we first derive the achievable rate distortion region of the system. Then, on the basis of the region analysis, the system outage probability when the channel suffers Nakagami-$m$ fading is analyzed. Finally, we design an optimal relay position identification algorithm based on the Soft Actor-Critic (SAC) algorithm, which determines the optimal UAV position to minimize the outage probability. The simulation results show that the proposed algorithm can optimize the UAV position and reduce the system outage probability effectively.<|reference_end|> | arxiv | @article{lian2024lossy,
title={Lossy Cooperative UAV Relaying Networks: Outage Probability Analysis and
Location Optimization},
author={Ya Lian and Wensheng Lin and Lixin Li and Fucheng Yang and Zhu Han
and Tad Matsumoto},
journal={arXiv preprint arXiv:2410.02120},
year={2024},
archivePrefix={arXiv},
eprint={2410.02120},
primaryClass={cs.NI cs.LG cs.SY eess.SY}
} | lian2024lossy |
arxiv-664852 | 2410.02121 | SC-CDM: Enhancing Quality of Image Semantic Communication with a Compact Diffusion Model | <|reference_start|>SC-CDM: Enhancing Quality of Image Semantic Communication with a Compact Diffusion Model: Semantic Communication (SC) is an emerging technology that has attracted much attention in the sixth-generation (6G) mobile communication systems. However, little prior work has fully considered the perceptual quality of the reconstructed image. To solve this problem, we propose a generative SC framework for wireless image transmission (denoted as SC-CDM). This approach leverages compact diffusion models to improve the fidelity and semantic accuracy of the images reconstructed after transmission, ensuring that the essential content is preserved even in bandwidth-constrained environments. Specifically, we aim to redesign the Swin Transformer as a new backbone for efficient semantic feature extraction and compression. Next, the receiver integrates the slim prior and image reconstruction networks. Compared to traditional Diffusion Models (DMs), it leverages DMs' robust distribution mapping capability to generate a compact condition vector, guiding image recovery, thus enhancing the perceptual details of the reconstructed images. Finally, a series of evaluation and ablation studies are conducted to validate the effectiveness and robustness of the proposed algorithm and further increase the Peak Signal-to-Noise Ratio (PSNR) by over 17% on top of CNN-based DeepJSCC.<|reference_end|> | arxiv | @article{zhang2024sc-cdm:,
title={SC-CDM: Enhancing Quality of Image Semantic Communication with a Compact
Diffusion Model},
author={Kexin Zhang and Lixin Li and Wensheng Lin and Yuna Yan and Wenchi Cheng and Zhu Han},
journal={arXiv preprint arXiv:2410.02121},
year={2024},
archivePrefix={arXiv},
eprint={2410.02121},
primaryClass={eess.IV cs.LG cs.NI}
} | zhang2024sc-cdm: |
arxiv-664853 | 2410.02122 | Resource Allocation Based on Optimal Transport Theory in ISAC-Enabled Multi-UAV Networks | <|reference_start|>Resource Allocation Based on Optimal Transport Theory in ISAC-Enabled Multi-UAV Networks: This paper investigates the resource allocation optimization for cooperative communication with non-cooperative localization in integrated sensing and communications (ISAC)-enabled multi-unmanned aerial vehicle (UAV) cooperative networks. Our goal is to maximize the weighted sum of the system's average sum rate and the localization quality of service (QoS) by jointly optimizing cell association, communication power allocation, and sensing power allocation. Since the formulated problem is a mixed-integer nonconvex problem, we propose the alternating iteration algorithm based on optimal transport theory (AIBOT) to solve the optimization problem more effectively. Simulation results demonstrate that the AIBOT can improve the system sum rate by nearly 12% and reduce the localization Cramér-Rao bound (CRB) by almost 29% compared to benchmark algorithms.<|reference_end|> | arxiv | @article{zheng2024resource,
title={Resource Allocation Based on Optimal Transport Theory in ISAC-Enabled
Multi-UAV Networks},
author={Yufeng Zheng and Lixin Li and Wensheng Lin and Wei Liang and Qinghe Du and Zhu Han},
journal={arXiv preprint arXiv:2410.02122},
year={2024},
archivePrefix={arXiv},
eprint={2410.02122},
primaryClass={cs.NI cs.SY eess.SY}
} | zheng2024resource |
arxiv-664854 | 2410.02126 | BayesCNS: A Unified Bayesian Approach to Address Cold Start and Non-Stationarity in Search Systems at Scale | <|reference_start|>BayesCNS: A Unified Bayesian Approach to Address Cold Start and Non-Stationarity in Search Systems at Scale: Information Retrieval (IR) systems used in search and recommendation platforms frequently employ Learning-to-Rank (LTR) models to rank items in response to user queries. These models heavily rely on features derived from user interactions, such as clicks and engagement data. This dependence introduces cold start issues for items lacking user engagement and poses challenges in adapting to non-stationary shifts in user behavior over time. We address both challenges holistically as an online learning problem and propose BayesCNS, a Bayesian approach designed to handle cold start and non-stationary distribution shifts in search systems at scale. BayesCNS achieves this by estimating prior distributions for user-item interactions, which are continuously updated with new user interactions gathered online. This online learning procedure is guided by a ranker model, enabling efficient exploration of relevant items using contextual information provided by the ranker. We successfully deployed BayesCNS in a large-scale search system and demonstrated its efficacy through comprehensive offline and online experiments. Notably, an online A/B experiment showed a 10.60% increase in new item interactions and a 1.05% improvement in overall success metrics over the existing production baseline.<|reference_end|> | arxiv | @article{ardywibowo2024bayescns:,
title={BayesCNS: A Unified Bayesian Approach to Address Cold Start and
Non-Stationarity in Search Systems at Scale},
author={Randy Ardywibowo and Rakesh Sunki and Lucy Kuo and Sankalp Nayak},
journal={arXiv preprint arXiv:2410.02126},
year={2024},
archivePrefix={arXiv},
eprint={2410.02126},
primaryClass={cs.IR cs.LG}
} | ardywibowo2024bayescns: |
arxiv-664855 | 2410.02128 | Breaking the mold: The challenge of large scale MARL specialization | <|reference_start|>Breaking the mold: The challenge of large scale MARL specialization: In multi-agent learning, the predominant approach focuses on generalization, often neglecting the optimization of individual agents. This emphasis on generalization limits the ability of agents to utilize their unique strengths, resulting in inefficiencies. This paper introduces Comparative Advantage Maximization (CAM), a method designed to enhance individual agent specialization in multiagent systems. CAM employs a two-phase process, combining centralized population training with individual specialization through comparative advantage maximization. CAM achieved a 13.2% improvement in individual agent performance and a 14.9% increase in behavioral diversity compared to state-of-the-art systems. The success of CAM highlights the importance of individual agent specialization, suggesting new directions for multi-agent system development.<|reference_end|> | arxiv | @article{juang2024breaking,
title={Breaking the mold: The challenge of large scale MARL specialization},
author={Stefan Juang and Hugh Cao and Arielle Zhou and Ruochen Liu and Nevin L. Zhang and Elvis Liu},
journal={arXiv preprint arXiv:2410.02128},
year={2024},
archivePrefix={arXiv},
eprint={2410.02128},
primaryClass={cs.LG}
} | juang2024breaking |
arxiv-664856 | 2410.02129 | DMC-Net: Lightweight Dynamic Multi-Scale and Multi-Resolution Convolution Network for Pancreas Segmentation in CT Images | <|reference_start|>DMC-Net: Lightweight Dynamic Multi-Scale and Multi-Resolution Convolution Network for Pancreas Segmentation in CT Images: Convolutional neural networks (CNNs) have shown great effectiveness in medical image segmentation. However, they may be limited in modeling large inter-subject variations in organ shapes and sizes and exploiting global long-range contextual information. This is because CNNs typically employ convolutions with fixed-sized local receptive fields and lack the mechanisms to utilize global information. To address these limitations, we developed Dynamic Multi-Resolution Convolution (DMRC) and Dynamic Multi-Scale Convolution (DMSC) modules. Both modules enhance the representation capabilities of single convolutions to capture varying scaled features and global contextual information. This is achieved in the DMRC module by employing a convolutional filter on images with different resolutions and subsequently utilizing dynamic mechanisms to model global inter-dependencies between features. In contrast, the DMSC module extracts features at different scales by employing convolutions with different kernel sizes and utilizing dynamic mechanisms to extract global contextual information. The utilization of convolutions with different kernel sizes in the DMSC module may increase computational complexity. To lessen this burden, we propose to use a lightweight design for convolution layers with a large kernel size. Thus, DMSC and DMRC modules are designed as lightweight drop-in replacements for single convolutions, and they can be easily integrated into general CNN architectures for end-to-end training. The segmentation network was proposed by incorporating our DMSC and DMRC modules into a standard U-Net architecture, termed Dynamic Multi-scale and Multi-resolution Convolution network (DMC-Net). The results demonstrate that our proposed DMSC and DMRC can enhance the representation capabilities of single convolutions and improve segmentation accuracy.<|reference_end|> | arxiv | @article{yang2024dmc-net:,
title={DMC-Net: Lightweight Dynamic Multi-Scale and Multi-Resolution
Convolution Network for Pancreas Segmentation in CT Images},
author={Jin Yang and Daniel S. Marcus and Aristeidis Sotiras},
journal={arXiv preprint arXiv:2410.02129},
year={2024},
archivePrefix={arXiv},
eprint={2410.02129},
primaryClass={eess.IV cs.CV}
} | yang2024dmc-net: |
arxiv-664857 | 2410.02130 | MDSGen: Fast and Efficient Masked Diffusion Temporal-Aware Transformers for Open-Domain Sound Generation | <|reference_start|>MDSGen: Fast and Efficient Masked Diffusion Temporal-Aware Transformers for Open-Domain Sound Generation: We introduce MDSGen, a novel framework for vision-guided open-domain sound generation optimized for model parameter size, memory consumption, and inference speed. This framework incorporates two key innovations: (1) a redundant video feature removal module that filters out unnecessary visual information, and (2) a temporal-aware masking strategy that leverages temporal context for enhanced audio generation accuracy. In contrast to existing resource-heavy Unet-based models, MDSGen employs denoising masked diffusion transformers, facilitating efficient generation without reliance on pre-trained diffusion models. Evaluated on the benchmark VGGSound dataset, our smallest model (5M parameters) achieves 97.9% alignment accuracy, using 172x fewer parameters, 371% less memory, and offering 36x faster inference than the current 860M-parameter state-of-the-art model (93.9% accuracy). The larger model (131M parameters) reaches nearly 99% accuracy while requiring 6.5x fewer parameters. These results highlight the scalability and effectiveness of our approach.<|reference_end|> | arxiv | @article{pham2024mdsgen:,
title={MDSGen: Fast and Efficient Masked Diffusion Temporal-Aware Transformers
for Open-Domain Sound Generation},
author={Trung X. Pham and Tri Ton and Chang D. Yoo},
journal={arXiv preprint arXiv:2410.02130},
year={2024},
archivePrefix={arXiv},
eprint={2410.02130},
primaryClass={cs.SD cs.CV eess.AS}
} | pham2024mdsgen: |
arxiv-664858 | 2410.02131 | C-MELT: Contrastive Enhanced Masked Auto-Encoders for ECG-Language Pre-Training | <|reference_start|>C-MELT: Contrastive Enhanced Masked Auto-Encoders for ECG-Language Pre-Training: Accurate interpretation of Electrocardiogram (ECG) signals is pivotal for diagnosing cardiovascular diseases. Integrating ECG signals with their accompanying textual reports holds immense potential to enhance clinical diagnostics through the combination of physiological data and qualitative insights. However, this integration faces significant challenges due to inherent modality disparities and the scarcity of labeled data for robust cross-modal learning. To address these obstacles, we propose C-MELT, a novel framework that pre-trains ECG and text data using a contrastive masked auto-encoder architecture. C-MELT uniquely combines the strengths of generative with enhanced discriminative capabilities to achieve robust cross-modal representations. This is accomplished through masked modality modeling, specialized loss functions, and an improved negative sampling strategy tailored for cross-modal alignment. Extensive experiments on five public datasets across diverse downstream tasks demonstrate that C-MELT significantly outperforms existing methods, achieving 15% and 2% increases in linear probing and zero-shot performance over state-of-the-art models, respectively. These results highlight the effectiveness of C-MELT, underscoring its potential to advance automated clinical diagnostics through multi-modal representations.<|reference_end|> | arxiv | @article{pham2024c-melt:,
title={C-MELT: Contrastive Enhanced Masked Auto-Encoders for ECG-Language
Pre-Training},
author={Manh Pham and Aaqib Saeed and Dong Ma},
journal={arXiv preprint arXiv:2410.02131},
year={2024},
archivePrefix={arXiv},
eprint={2410.02131},
primaryClass={cs.LG cs.CL}
} | pham2024c-melt: |
arxiv-664859 | 2410.02132 | Nonuniform random feature models using derivative information | <|reference_start|>Nonuniform random feature models using derivative information: We propose nonuniform data-driven parameter distributions for neural network initialization based on derivative data of the function to be approximated. These parameter distributions are developed in the context of non-parametric regression models based on shallow neural networks, and compare favorably to well-established uniform random feature models based on conventional weight initialization. We address the cases of Heaviside and ReLU activation functions, and their smooth approximations (sigmoid and softplus), and use recent results on the harmonic analysis and sparse representation of neural networks resulting from fully trained optimal networks. Extending analytic results that give exact representation, we obtain densities that concentrate in regions of the parameter space corresponding to neurons that are well suited to model the local derivatives of the unknown function. Based on these results, we suggest simplifications of these exact densities based on approximate derivative data in the input points that allow for very efficient sampling and lead to performance of random feature models close to optimal networks in several scenarios.<|reference_end|> | arxiv | @article{pieper2024nonuniform,
title={Nonuniform random feature models using derivative information},
author={Konstantin Pieper and Zezhong Zhang and Guannan Zhang},
journal={arXiv preprint arXiv:2410.02132},
year={2024},
archivePrefix={arXiv},
eprint={2410.02132},
primaryClass={cs.LG cs.NA math.NA}
} | pieper2024nonuniform |
arxiv-664860 | 2410.02133 | TrajGPT: Irregular Time-Series Representation Learning for Health Trajectory Analysis | <|reference_start|>TrajGPT: Irregular Time-Series Representation Learning for Health Trajectory Analysis: In many domains, such as healthcare, time-series data is often irregularly sampled with varying intervals between observations. This poses challenges for classical time-series models that require equally spaced data. To address this, we propose a novel time-series Transformer called Trajectory Generative Pre-trained Transformer (TrajGPT). TrajGPT employs a novel Selective Recurrent Attention (SRA) mechanism, which utilizes a data-dependent decay to adaptively filter out irrelevant past information based on contexts. By interpreting TrajGPT as discretized ordinary differential equations (ODEs), it effectively captures the underlying continuous dynamics and enables time-specific inference for forecasting arbitrary target timesteps. Experimental results demonstrate that TrajGPT excels in trajectory forecasting, drug usage prediction, and phenotype classification without requiring task-specific fine-tuning. By evolving the learned continuous dynamics, TrajGPT can interpolate and extrapolate disease risk trajectories from partially-observed time series. The visualization of predicted health trajectories shows that TrajGPT forecasts unseen diseases based on the history of clinically relevant phenotypes (i.e., contexts).<|reference_end|> | arxiv | @article{song2024trajgpt:,
title={TrajGPT: Irregular Time-Series Representation Learning for Health
Trajectory Analysis},
author={Ziyang Song and Qingcheng Lu and He Zhu and David Buckeridge and Yue Li},
journal={arXiv preprint arXiv:2410.02133},
year={2024},
archivePrefix={arXiv},
eprint={2410.02133},
primaryClass={cs.LG}
} | song2024trajgpt: |
arxiv-664861 | 2410.02136 | Disentangled Representation Learning for Parametric Partial Differential Equations | <|reference_start|>Disentangled Representation Learning for Parametric Partial Differential Equations: Neural operators (NOs) have demonstrated remarkable success in learning mappings between function spaces, serving as efficient approximators for the forward solutions of complex physical systems governed by partial differential equations (PDEs). However, while effective as black-box solvers, they offer limited insight into the underlying physical mechanism, due to the lack of interpretable representations of the physical parameters that drive the system. To tackle this challenge, we propose a new paradigm for learning disentangled representations from neural operator parameters, thereby effectively solving an inverse problem. Specifically, we introduce DisentangO, a novel hyper-neural operator architecture designed to unveil and disentangle the latent physical factors of variation embedded within the black-box neural operator parameters. At the core of DisentangO is a multi-task neural operator architecture that distills the varying parameters of the governing PDE through a task-wise adaptive layer, coupled with a hierarchical variational autoencoder that disentangles these variations into identifiable latent factors. By learning these disentangled representations, our model not only enhances physical interpretability but also enables more robust generalization across diverse physical systems. Empirical evaluations across supervised, semi-supervised, and unsupervised learning contexts show that DisentangO effectively extracts meaningful and interpretable latent features, bridging the divide between predictive performance and physical understanding in neural operator frameworks.<|reference_end|> | arxiv | @article{liu2024disentangled,
title={Disentangled Representation Learning for Parametric Partial Differential
Equations},
author={Ning Liu and Lu Zhang and Tian Gao and Yue Yu},
journal={arXiv preprint arXiv:2410.02136},
year={2024},
archivePrefix={arXiv},
eprint={2410.02136},
primaryClass={cs.LG}
} | liu2024disentangled |
arxiv-664862 | 2410.02140 | A Formal Framework for Understanding Length Generalization in Transformers | <|reference_start|>A Formal Framework for Understanding Length Generalization in Transformers: A major challenge for transformers is generalizing to sequences longer than those observed during training. While previous works have empirically shown that transformers can either succeed or fail at length generalization depending on the task, theoretical understanding of this phenomenon remains limited. In this work, we introduce a rigorous theoretical framework to analyze length generalization in causal transformers with learnable absolute positional encodings. In particular, we characterize those functions that are identifiable in the limit from sufficiently long inputs with absolute positional encodings under an idealized inference scheme using a norm-based regularizer. This enables us to prove the possibility of length generalization for a rich family of problems. We experimentally validate the theory as a predictor of success and failure of length generalization across a range of algorithmic and formal language tasks. Our theory not only explains a broad set of empirical observations but also opens the way to provably predicting length generalization capabilities in transformers.<|reference_end|> | arxiv | @article{huang2024a,
title={A Formal Framework for Understanding Length Generalization in
Transformers},
author={Xinting Huang and Andy Yang and Satwik Bhattamishra and Yash Sarrof and Andreas Krebs and Hattie Zhou and Preetum Nakkiran and Michael Hahn},
journal={arXiv preprint arXiv:2410.02140},
year={2024},
archivePrefix={arXiv},
eprint={2410.02140},
primaryClass={cs.LG}
} | huang2024a |
arxiv-664863 | 2410.02141 | E2H: A Two-Stage Non-Invasive Neural Signal Driven Humanoid Robotic Whole-Body Control Framework | <|reference_start|>E2H: A Two-Stage Non-Invasive Neural Signal Driven Humanoid Robotic Whole-Body Control Framework: Recent advancements in humanoid robotics, including the integration of hierarchical reinforcement learning-based control and the utilization of LLM planning, have significantly enhanced the ability of robots to perform complex tasks. In contrast to the highly developed humanoid robots, the human factors involved remain relatively unexplored. Directly controlling humanoid robots with the brain has already appeared in many science fiction novels, such as Pacific Rim and Gundam. In this work, we present E2H (EEG-to-Humanoid), an innovative framework that pioneers the control of humanoid robots using high-frequency non-invasive neural signals. As the non-invasive signal quality remains low in decoding precise spatial trajectory, we decompose the E2H framework in an innovative two-stage formation: 1) decoding neural signals (EEG) into semantic motion keywords, 2) utilizing LLM facilitated motion generation with a precise motion imitation control policy to realize humanoid robotics control. The method of directly driving robots with brainwave commands offers a novel approach to human-machine collaboration, especially in situations where verbal commands are impractical, such as in cases of speech impairments, space exploration, or underwater exploration, unlocking significant potential. E2H offers an exciting glimpse into the future, holding immense potential for human-computer interaction.<|reference_end|> | arxiv | @article{duan2024e2h:,
title={E2H: A Two-Stage Non-Invasive Neural Signal Driven Humanoid Robotic
Whole-Body Control Framework},
author={Yiqun Duan and Qiang Zhang and Jinzhao Zhou and Jingkai Sun and Xiaowei Jiang and Jiahang Cao and Jiaxu Wang and Yiqian Yang and Wen Zhao and Gang Han and Yijie Guo and Chin-Teng Lin},
journal={arXiv preprint arXiv:2410.02141},
year={2024},
archivePrefix={arXiv},
eprint={2410.02141},
primaryClass={cs.RO cs.HC}
} | duan2024e2h: |
arxiv-664864 | 2410.02142 | A Miniature Potentiostat for Impedance Spectroscopy and Cyclic Voltammetry in Wearable Sensor Integration | <|reference_start|>A Miniature Potentiostat for Impedance Spectroscopy and Cyclic Voltammetry in Wearable Sensor Integration: A potentiostat is an analytical device and a crucial component in electrochemical instruments used for studying chemical reaction mechanisms, with potential applications in early diagnosis of disease or critical health conditions. Conventional potentiostats are typically benchtop devices designed for laboratory use, whereas a wearable potentiostat can be interfaced with biochemical sensors for disease diagnostics at home. This work presents a low-power potentiostat designed to connect with a sensor array consisting of eight to ten working electrodes. The potentiostat is capable of running Electrochemical Impedance Spectroscopy and Cyclic Voltammetry. The system is powered by lithium-ion batteries and uses Bluetooth for data transmission to the user. A single ARM M4 microcontroller, integrated with a Bluetooth low-energy radio module, controls the entire system. The accuracy, reliability, and power efficiency of the potentiostat were evaluated and compared against existing commercial benchtop potentiostats. Additionally, we have outlined future steps to enhance circuit miniaturization and power efficiency, aiming to develop fully integrated wearable sensing devices comparable in size to a wristwatch.<|reference_end|> | arxiv | @article{franulovic2024a,
title={A Miniature Potentiostat for Impedance Spectroscopy and Cyclic
Voltammetry in Wearable Sensor Integration},
author={Franci Franulovic and Shawana Tabassum},
journal={arXiv preprint arXiv:2410.02142},
year={2024},
archivePrefix={arXiv},
eprint={2410.02142},
primaryClass={eess.SY cs.SY}
} | franulovic2024a |
arxiv-664865 | 2410.02143 | Plug-and-Play Controllable Generation for Discrete Masked Models | <|reference_start|>Plug-and-Play Controllable Generation for Discrete Masked Models: This article makes discrete masked models for the generative modeling of discrete data controllable. The goal is to generate samples of a discrete random variable that adheres to a posterior distribution, satisfies specific constraints, or optimizes a reward function. This methodological development enables broad applications across downstream tasks such as class-specific image generation and protein design. Existing approaches for controllable generation of masked models typically rely on task-specific fine-tuning or additional modifications, which can be inefficient and resource-intensive. To overcome these limitations, we propose a novel plug-and-play framework based on importance sampling that bypasses the need for training a conditional score. Our framework is agnostic to the choice of control criteria, requires no gradient information, and is well-suited for tasks such as posterior sampling, Bayesian inverse problems, and constrained generation. We demonstrate the effectiveness of our approach through extensive experiments, showcasing its versatility across multiple domains, including protein design.<|reference_end|> | arxiv | @article{guo2024plug-and-play,
title={Plug-and-Play Controllable Generation for Discrete Masked Models},
author={Wei Guo and Yuchen Zhu and Molei Tao and Yongxin Chen},
journal={arXiv preprint arXiv:2410.02143},
year={2024},
archivePrefix={arXiv},
eprint={2410.02143},
primaryClass={cs.LG stat.ML}
} | guo2024plug-and-play |
arxiv-664866 | 2410.02144 | SoundMorpher: Perceptually-Uniform Sound Morphing with Diffusion Model | <|reference_start|>SoundMorpher: Perceptually-Uniform Sound Morphing with Diffusion Model: We present SoundMorpher, a sound morphing method that generates perceptually uniform morphing trajectories using a diffusion model. Traditional sound morphing methods model the intractable relationship between morph factor and perception of the stimuli for resulting sounds under a linear assumption, which oversimplifies the complex nature of sound perception and limits their morph quality. In contrast, SoundMorpher explores an explicit proportional mapping between the morph factor and the perceptual stimuli of morphed sounds based on Mel-spectrogram. This approach enables smoother transitions between intermediate sounds and ensures perceptually consistent transformations, which can be easily extended to diverse sound morphing tasks. Furthermore, we present a set of quantitative metrics to comprehensively assess sound morphing systems based on three objective criteria, namely, correspondence, perceptual intermediateness, and smoothness. We provide extensive experiments to demonstrate the effectiveness and versatility of SoundMorpher in real-world scenarios, highlighting its potential impact on various applications such as creative music composition, film post-production and interactive audio technologies.<|reference_end|> | arxiv | @article{niu2024soundmorpher:,
title={SoundMorpher: Perceptually-Uniform Sound Morphing with Diffusion Model},
author={Xinlei Niu and Jing Zhang and Charles Patrick Martin},
journal={arXiv preprint arXiv:2410.02144},
year={2024},
archivePrefix={arXiv},
eprint={2410.02144},
primaryClass={cs.SD cs.LG eess.AS}
} | niu2024soundmorpher: |
arxiv-664867 | 2410.02145 | Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes | <|reference_start|>Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes: Active learning methods aim to improve sample complexity in machine learning. In this work, we investigate an active learning scheme via a novel gradient-free cutting-plane training method for ReLU networks of arbitrary depth. We demonstrate, for the first time, that cutting-plane algorithms, traditionally used in linear models, can be extended to deep neural networks despite their nonconvexity and nonlinear decision boundaries. Our results demonstrate that these methods provide a promising alternative to the commonly employed gradient-based optimization techniques in large-scale neural networks. Moreover, this training method induces the first deep active learning scheme known to achieve convergence guarantees. We exemplify the effectiveness of our proposed active learning method against popular deep active learning baselines via both synthetic data experiments and a sentiment classification task on real datasets.<|reference_end|> | arxiv | @article{zhang2024active,
title={Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes},
author={Erica Zhang and Fangzhao Zhang and Mert Pilanci},
journal={arXiv preprint arXiv:2410.02145},
year={2024},
archivePrefix={arXiv},
eprint={2410.02145},
primaryClass={cs.LG math.OC}
} | zhang2024active |
arxiv-664868 | 2410.02147 | Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement | <|reference_start|>Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement: In this paper, we propose a framework for efficient Source-Free Domain Adaptation (SFDA) in the context of time-series, focusing on enhancing both parameter efficiency and data-sample utilization. Our approach introduces an improved paradigm for source-model preparation and target-side adaptation, aiming to enhance training efficiency during target adaptation. Specifically, we reparameterize the source model's weights in a Tucker-style decomposed manner, factorizing the model into a compact form during the source model preparation phase. During target-side adaptation, only a subset of these decomposed factors is fine-tuned, leading to significant improvements in training efficiency. We demonstrate using PAC Bayesian analysis that this selective fine-tuning strategy implicitly regularizes the adaptation process by constraining the model's learning capacity. Furthermore, this re-parameterization reduces the overall model size and enhances inference efficiency, making the approach particularly well suited for resource-constrained devices. Additionally, we demonstrate that our framework is compatible with various SFDA methods and achieves significant computational efficiency, reducing the number of fine-tuned parameters and inference overhead in terms of MACs by over 90% while maintaining model performance.<|reference_end|> | arxiv | @article{patel2024efficient,
title={Efficient Source-Free Time-Series Adaptation via Parameter Subspace
Disentanglement},
author={Gaurav Patel and Christopher Sandino and Behrooz Mahasseni and Ellen L Zippi and Erdrin Azemi and Ali Moin and Juri Minxha},
journal={arXiv preprint arXiv:2410.02147},
year={2024},
archivePrefix={arXiv},
eprint={2410.02147},
primaryClass={cs.LG cs.AI eess.SP}
} | patel2024efficient |
arxiv-664869 | 2410.02148 | Reducing Warning Errors in Driver Support with Personalized Risk Maps | <|reference_start|>Reducing Warning Errors in Driver Support with Personalized Risk Maps: We consider the problem of human-focused driver support. State-of-the-art personalization concepts allow the estimation of parameters for vehicle control systems or driver models. However, there are currently few approaches proposed that use personalized models and evaluate the effectiveness in the form of general risk warning. In this paper, we therefore propose a warning system that estimates a personalized risk factor for the given driver based on the driver's behavior. The system afterwards is able to adapt the warning signal with personalized Risk Maps. In experiments, we show examples for longitudinal following and intersection scenarios in which the novel warning system can effectively reduce false negative errors and false positive errors compared to a baseline approach which does not use personalized driver considerations. This underlines the potential of personalization for reducing warning errors in risk warning and driver support.<|reference_end|> | arxiv | @article{puphal2024reducing,
title={Reducing Warning Errors in Driver Support with Personalized Risk Maps},
author={Tim Puphal and Ryohei Hirano and Takayuki Kawabuchi and Akihito Kimata and Julian Eggert},
journal={arXiv preprint arXiv:2410.02148},
year={2024},
archivePrefix={arXiv},
eprint={2410.02148},
primaryClass={cs.RO cs.LG}
} | puphal2024reducing |
arxiv-664870 | 2410.02149 | Matrix and Relative Weak Crossover in Japanese: An Experimental Investigation | <|reference_start|>Matrix and Relative Weak Crossover in Japanese: An Experimental Investigation: This paper provides evidence that weak crossover effects differ in nature between matrix and relative clauses. Fukushima et al. (2024) provided similar evidence, showing that, when various non-structural factors were eliminated, English speakers never accepted matrix weak crossover cases, but often accepted relative weak crossover ones. Those results were limited, however, by English word order, which led to uncertainty as to whether this difference was due to the effects of linear precedence or syntactic structure. In this paper, to distinguish between these two possibilities, we conduct an experiment using Japanese, which lacks the word-order confound that English had. We find results that are qualitatively in line with Fukushima et al. (2024), suggesting that the relevant distinction is structural and not based simply on precedence.<|reference_end|> | arxiv | @article{fukushima2024matrix,
title={Matrix and Relative Weak Crossover in Japanese: An Experimental
Investigation},
author={Haruka Fukushima and Daniel Plesniak and Daisuke Bekki},
journal={arXiv preprint arXiv:2410.02149},
year={2024},
archivePrefix={arXiv},
eprint={2410.02149},
primaryClass={cs.CL}
} | fukushima2024matrix |
arxiv-664871 | 2410.02151 | Quantitative Approximation for Neural Operators in Nonlinear Parabolic Equations | <|reference_start|>Quantitative Approximation for Neural Operators in Nonlinear Parabolic Equations: Neural operators serve as universal approximators for general continuous operators. In this paper, we derive the approximation rate of solution operators for the nonlinear parabolic partial differential equations (PDEs), contributing to the quantitative approximation theorem for solution operators of nonlinear PDEs. Our results show that neural operators can efficiently approximate these solution operators without the exponential growth in model complexity, thus strengthening the theoretical foundation of neural operators. A key insight in our proof is to transfer PDEs into the corresponding integral equations via Duhamel's principle, and to leverage the similarity between neural operators and Picard's iteration, a classical algorithm for solving PDEs. This approach is potentially generalizable beyond parabolic PDEs to a range of other equations, including the Navier-Stokes equation, nonlinear Schr\"odinger equations and nonlinear wave equations, which can be solved by Picard's iteration.<|reference_end|> | arxiv | @article{furuya2024quantitative,
title={Quantitative Approximation for Neural Operators in Nonlinear Parabolic
Equations},
author={Takashi Furuya and Koichi Taniguchi and Satoshi Okuda},
journal={arXiv preprint arXiv:2410.02151},
year={2024},
archivePrefix={arXiv},
eprint={2410.02151},
primaryClass={cs.LG cs.NA math.NA stat.ML}
} | furuya2024quantitative |
arxiv-664872 | 2410.02152 | An Evaluation of Large Pre-Trained Models for Gesture Recognition using Synthetic Videos | <|reference_start|>An Evaluation of Large Pre-Trained Models for Gesture Recognition using Synthetic Videos: In this work, we explore the possibility of using synthetically generated data for video-based gesture recognition with large pre-trained models. We consider whether these models have sufficiently robust and expressive representation spaces to enable "training-free" classification. Specifically, we utilize various state-of-the-art video encoders to extract features for use in k-nearest neighbors classification, where the training data points are derived from synthetic videos only. We compare these results with another training-free approach -- zero-shot classification using text descriptions of each gesture. In our experiments with the RoCoG-v2 dataset, we find that using synthetic training videos yields significantly lower classification accuracy on real test videos compared to using a relatively small number of real training videos. We also observe that video backbones that were fine-tuned on classification tasks serve as superior feature extractors, and that the choice of fine-tuning data has a substantial impact on k-nearest neighbors performance. Lastly, we find that zero-shot text-based classification performs poorly on the gesture recognition task, as gestures are not easily described through natural language.<|reference_end|> | arxiv | @article{reddy2024an,
title={An Evaluation of Large Pre-Trained Models for Gesture Recognition using
Synthetic Videos},
author={Arun Reddy and Ketul Shah and Corban Rivera and William Paul and Celso M. De Melo and Rama Chellappa},
journal={Synthetic Data for Artificial Intelligence and Machine Learning:
Tools, Techniques, and Applications II. Vol. 13035. SPIE, 2024},
year={2024},
doi={10.1117/12.3013530},
archivePrefix={arXiv},
eprint={2410.02152},
primaryClass={cs.CV}
} | reddy2024an |
arxiv-664873 | 2410.02154 | Guaranteed-Safe MPPI Through Composite Control Barrier Functions for Efficient Sampling in Multi-Constrained Robotic Systems | <|reference_start|>Guaranteed-Safe MPPI Through Composite Control Barrier Functions for Efficient Sampling in Multi-Constrained Robotic Systems: We present a new guaranteed-safe model predictive path integral (GS-MPPI) control algorithm that enhances sample efficiency in nonlinear systems with multiple safety constraints. The approach uses a composite control barrier function (CBF) along with MPPI to ensure all sampled trajectories are provably safe. We first construct a single CBF constraint from multiple safety constraints with potentially differing relative degrees, using it to create a safe closed-form control law. This safe control is then integrated into the system dynamics, allowing MPPI to optimize over exclusively safe trajectories. The method not only improves computational efficiency but also addresses the myopic behavior often associated with CBFs by incorporating long-term performance considerations. We demonstrate the algorithm's effectiveness through simulations of a nonholonomic ground robot subject to position and speed constraints, showcasing safety and performance.<|reference_end|> | arxiv | @article{rabiee2024guaranteed-safe,
title={Guaranteed-Safe MPPI Through Composite Control Barrier Functions for
Efficient Sampling in Multi-Constrained Robotic Systems},
author={Pedram Rabiee and Jesse B. Hoagg},
journal={arXiv preprint arXiv:2410.02154},
year={2024},
archivePrefix={arXiv},
eprint={2410.02154},
primaryClass={eess.SY cs.SY}
} | rabiee2024guaranteed-safe |
arxiv-664874 | 2410.02155 | From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities | <|reference_start|>From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities: Multimodal Large Language Models have made significant strides in integrating visual and textual information, yet they often struggle with effectively aligning these modalities. We introduce a novel image tokenizer that bridges this gap by applying the principle of Byte-Pair Encoding (BPE) to visual data. Unlike conventional approaches that rely on separate visual encoders, our method directly incorporates structural prior information into image tokens, mirroring the successful tokenization strategies used in text-only Large Language Models. This innovative approach enables Transformer models to more effectively learn and reason across modalities. Through theoretical analysis and extensive experiments, we demonstrate that our BPE Image Tokenizer significantly enhances MLLMs' multimodal understanding capabilities, even with limited training data. Our method not only improves performance across various benchmarks but also shows promising scalability, potentially paving the way for more efficient and capable multimodal foundation models.<|reference_end|> | arxiv | @article{zhang2024from,
title={From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities},
author={Wanpeng Zhang and Zilong Xie and Yicheng Feng and Yijiang Li and Xingrun Xing and Sipeng Zheng and Zongqing Lu},
journal={arXiv preprint arXiv:2410.02155},
year={2024},
archivePrefix={arXiv},
eprint={2410.02155},
primaryClass={cs.AI cs.CL cs.CV}
} | zhang2024from |
arxiv-664875 | 2410.02156 | The why, what, and how of AI-based coding in scientific research | <|reference_start|>The why, what, and how of AI-based coding in scientific research: Computer programming (coding) is indispensable for researchers across disciplines, yet it remains challenging to learn and time-consuming to carry out. Generative AI, particularly large language models (LLMs), has the potential to transform coding into intuitive conversations, but best practices and effective workflows are only emerging. We dissect AI-based coding through three key lenses: the nature and role of LLMs in coding (why), six types of coding assistance they provide (what), and a five-step workflow in action with practical implementation strategies (how). Additionally, we address the limitations and future outlook of AI in coding. By offering actionable insights, this framework helps to guide researchers in effectively leveraging AI to enhance coding practices and education, accelerating scientific progress.<|reference_end|> | arxiv | @article{zhuang2024the,
title={The why, what, and how of AI-based coding in scientific research},
author={Tonghe Zhuang and Zhicheng Lin},
journal={arXiv preprint arXiv:2410.02156},
year={2024},
archivePrefix={arXiv},
eprint={2410.02156},
primaryClass={cs.CY cs.AI cs.CL cs.PL}
} | zhuang2024the |
arxiv-664876 | 2410.02158 | ClassContrast: Bridging the Spatial and Contextual Gaps for Node Representations | <|reference_start|>ClassContrast: Bridging the Spatial and Contextual Gaps for Node Representations: Graph Neural Networks (GNNs) have revolutionized the domain of graph representation learning by utilizing neighborhood aggregation schemes in many popular architectures, such as message passing graph neural networks (MPGNNs). This scheme involves iteratively calculating a node's representation vector by aggregating and transforming the representation vectors of its adjacent nodes. Despite their effectiveness, MPGNNs face significant issues, such as oversquashing, oversmoothing, and underreaching, which hamper their effectiveness. Additionally, the reliance of MPGNNs on the homophily assumption, where edges typically connect nodes with similar labels and features, limits their performance in heterophilic contexts, where connected nodes often have significant differences. This necessitates the development of models that can operate effectively in both homophilic and heterophilic settings. In this paper, we propose a novel approach, ClassContrast, grounded in Energy Landscape Theory from Chemical Physics, to overcome these limitations. ClassContrast combines spatial and contextual information, leveraging a physics-inspired energy landscape to model node embeddings that are both discriminative and robust across homophilic and heterophilic settings. Our approach introduces contrast-based homophily matrices to enhance the understanding of class interactions and tendencies. Through extensive experiments, we demonstrate that ClassContrast outperforms traditional GNNs in node classification and link prediction tasks, proving its effectiveness and versatility in diverse real-world scenarios.<|reference_end|> | arxiv | @article{uddin2024classcontrast:,
title={ClassContrast: Bridging the Spatial and Contextual Gaps for Node
Representations},
author={Md Joshem Uddin and Astrit Tola and Varin Sikand and Cuneyt Gurcan Akcora and Baris Coskunuzer},
journal={arXiv preprint arXiv:2410.02158},
year={2024},
archivePrefix={arXiv},
eprint={2410.02158},
primaryClass={cs.LG cs.CG stat.ML}
} | uddin2024classcontrast: |
arxiv-664877 | 2410.02159 | Mitigating Memorization In Language Models | <|reference_start|>Mitigating Memorization In Language Models: Language models (LMs) can "memorize" information, i.e., encode training data in their weights in such a way that inference-time queries can lead to verbatim regurgitation of that data. This ability to extract training data can be problematic, for example, when data are private or sensitive. In this work, we investigate methods to mitigate memorization: three regularizer-based, three finetuning-based, and eleven machine unlearning-based methods, with five of the latter being new methods that we introduce. We also introduce TinyMem, a suite of small, computationally-efficient LMs for the rapid development and evaluation of memorization-mitigation methods. We demonstrate that the mitigation methods that we develop using TinyMem can successfully be applied to production-grade LMs, and we determine via experiment that: regularizer-based mitigation methods are slow and ineffective at curbing memorization; fine-tuning-based methods are effective at curbing memorization, but overly expensive, especially for retaining higher accuracies; and unlearning-based methods are faster and more effective, allowing for the precise localization and removal of memorized information from LM weights prior to inference. We show, in particular, that our proposed unlearning method BalancedSubnet outperforms other mitigation methods at removing memorized information while preserving performance on target tasks.<|reference_end|> | arxiv | @article{sakarvadia2024mitigating,
title={Mitigating Memorization In Language Models},
author={Mansi Sakarvadia and Aswathy Ajith and Arham Khan and Nathaniel Hudson and Caleb Geniesse and Kyle Chard and Yaoqing Yang and Ian Foster and Michael W. Mahoney},
journal={arXiv preprint arXiv:2410.02159},
year={2024},
archivePrefix={arXiv},
eprint={2410.02159},
primaryClass={cs.LG cs.AI cs.CL}
} | sakarvadia2024mitigating |
arxiv-664878 | 2410.02160 | RiskSEA : A Scalable Graph Embedding for Detecting On-chain Fraudulent Activities on the Ethereum Blockchain | <|reference_start|>RiskSEA : A Scalable Graph Embedding for Detecting On-chain Fraudulent Activities on the Ethereum Blockchain: Like any other useful technology, cryptocurrencies are sometimes used for criminal activities. While transactions are recorded on the blockchain, there exists a need for a more rapid and scalable method to detect addresses associated with fraudulent activities. We present RiskSEA, a scalable risk scoring system capable of effectively handling the dynamic nature of large-scale blockchain transaction graphs. The risk scoring system, which we implement for Ethereum, consists of 1. a scalable approach to generating node2vec embeddings for the entire set of addresses to capture the graph topology, 2. transaction-based features to capture the transactional behavioral pattern of an address, and 3. a classifier model to generate risk scores for addresses that combines the node2vec embedding and behavioral features. Efficiently generating node2vec embeddings for large-scale and dynamically evolving blockchain transaction graphs is challenging; we present two novel approaches for generating node2vec embeddings and effectively scaling them to the entire set of blockchain addresses: 1. node2vec embedding propagation and 2. dynamic node2vec embedding. We present a comprehensive analysis of the proposed approaches. Our experiments show that combining both behavioral and node2vec features boosts the classification performance significantly, and that the dynamic node2vec embeddings perform better than the node2vec propagated embeddings.<|reference_end|> | arxiv | @article{agarwal2024risksea,
title={RiskSEA : A Scalable Graph Embedding for Detecting On-chain Fraudulent
Activities on the Ethereum Blockchain},
author={Ayush Agarwal and Lv Lu and Arjun Maheswaran and Varsha Mahadevan and Bhaskar Krishnamachari},
journal={arXiv preprint arXiv:2410.02160},
year={2024},
archivePrefix={arXiv},
eprint={2410.02160},
primaryClass={cs.CR cs.AI cs.LG}
} | agarwal2024risksea |
arxiv-664879 | 2410.02162 | Planning in Strawberry Fields: Evaluating and Improving the Planning and Scheduling Capabilities of LRM o1 | <|reference_start|>Planning in Strawberry Fields: Evaluating and Improving the Planning and Scheduling Capabilities of LRM o1: The ability to plan a course of action that achieves a desired state of affairs has long been considered a core competence of intelligent agents and has been an integral part of AI research since its inception. With the advent of large language models (LLMs), there has been considerable interest in the question of whether or not they possess such planning abilities, but -- despite the slew of new private and open source LLMs since GPT3 -- progress has remained slow. OpenAI claims that their recent o1 (Strawberry) model has been specifically constructed and trained to escape the normal limitations of autoregressive LLMs -- making it a new kind of model: a Large Reasoning Model (LRM). In this paper, we evaluate the planning capabilities of two LRMs (o1-preview and o1-mini) on both planning and scheduling benchmarks. We see that while o1 does seem to offer significant improvements over autoregressive LLMs, this comes at a steep inference cost, while still failing to provide any guarantees over what it generates. We also show that combining o1 models with external verifiers -- in a so-called LRM-Modulo system -- guarantees the correctness of the combined system's output while further improving performance.<|reference_end|> | arxiv | @article{valmeekam2024planning,
title={Planning in Strawberry Fields: Evaluating and Improving the Planning and
Scheduling Capabilities of LRM o1},
author={Karthik Valmeekam and Kaya Stechly and Atharva Gundawar and Subbarao Kambhampati},
journal={arXiv preprint arXiv:2410.02162},
year={2024},
archivePrefix={arXiv},
eprint={2410.02162},
primaryClass={cs.AI}
} | valmeekam2024planning |
arxiv-664880 | 2410.02163 | Controlled Generation of Natural Adversarial Documents for Stealthy Retrieval Poisoning | <|reference_start|>Controlled Generation of Natural Adversarial Documents for Stealthy Retrieval Poisoning: Recent work showed that retrieval based on embedding similarity (e.g., for retrieval-augmented generation) is vulnerable to poisoning: an adversary can craft malicious documents that are retrieved in response to broad classes of queries. We demonstrate that previous, HotFlip-based techniques produce documents that are very easy to detect using perplexity filtering. Even if generation is constrained to produce low-perplexity text, the resulting documents are recognized as unnatural by LLMs and can be automatically filtered from the retrieval corpus. We design, implement, and evaluate a new controlled generation technique that combines an adversarial objective (embedding similarity) with a "naturalness" objective based on soft scores computed using an open-source, surrogate LLM. The resulting adversarial documents (1) cannot be automatically detected using perplexity filtering and/or other LLMs, except at the cost of significant false positives in the retrieval corpus, yet (2) achieve similar poisoning efficacy to easily-detectable documents generated using HotFlip, and (3) are significantly more effective than prior methods for energy-guided generation, such as COLD.<|reference_end|> | arxiv | @article{zhang2024controlled,
title={Controlled Generation of Natural Adversarial Documents for Stealthy
Retrieval Poisoning},
author={Collin Zhang and Tingwei Zhang and Vitaly Shmatikov},
journal={arXiv preprint arXiv:2410.02163},
year={2024},
archivePrefix={arXiv},
eprint={2410.02163},
primaryClass={cs.CL cs.CR cs.LG}
} | zhang2024controlled |
arxiv-664881 | 2410.02164 | Universality in Transfer Learning for Linear Models | <|reference_start|>Universality in Transfer Learning for Linear Models: Transfer learning is an attractive framework for problems where there is a paucity of data, or where data collection is costly. One common approach to transfer learning is referred to as "model-based", and involves using a model that is pretrained on samples from a source distribution, which is easier to acquire, and then fine-tuning the model on a few samples from the target distribution. The hope is that, if the source and target distributions are "close", then the fine-tuned model will perform well on the target distribution even though it has seen only a few samples from it. In this work, we study the problem of transfer learning in linear models for both regression and binary classification. In particular, we consider the use of stochastic gradient descent (SGD) on a linear model initialized with pretrained weights and using a small training data set from the target distribution. In the asymptotic regime of large models, we provide an exact and rigorous analysis and relate the generalization errors (in regression) and classification errors (in binary classification) for the pretrained and fine-tuned models. In particular, we give conditions under which the fine-tuned model outperforms the pretrained one. An important aspect of our work is that all the results are "universal", in the sense that they depend only on the first and second order statistics of the target distribution. They thus extend well beyond the standard Gaussian assumptions commonly made in the literature.<|reference_end|> | arxiv | @article{ghane2024universality,
title={Universality in Transfer Learning for Linear Models},
author={Reza Ghane and Danil Akhtiamov and Babak Hassibi},
journal={arXiv preprint arXiv:2410.02164},
year={2024},
archivePrefix={arXiv},
eprint={2410.02164},
primaryClass={cs.LG stat.ML}
} | ghane2024universality |
arxiv-664882 | 2410.02165 | A LLM-Powered Automatic Grading Framework with Human-Level Guidelines Optimization | <|reference_start|>A LLM-Powered Automatic Grading Framework with Human-Level Guidelines Optimization: Open-ended short-answer questions (SAGs) have been widely recognized as a powerful tool for providing deeper insights into learners' responses in the context of learning analytics (LA). However, SAGs often present challenges in practice due to the high grading workload and concerns about inconsistent assessments. With recent advancements in natural language processing (NLP), automatic short-answer grading (ASAG) offers a promising solution to these challenges. Despite this, current ASAG algorithms are often limited in generalizability and tend to be tailored to specific questions. In this paper, we propose a unified multi-agent ASAG framework, GradeOpt, which leverages large language models (LLMs) as graders for SAGs. More importantly, GradeOpt incorporates two additional LLM-based agents - the reflector and the refiner - into the multi-agent system. This enables GradeOpt to automatically optimize the original grading guidelines by performing self-reflection on its errors. Through experiments on a challenging ASAG task, namely the grading of pedagogical content knowledge (PCK) and content knowledge (CK) questions, GradeOpt demonstrates superior performance in grading accuracy and behavior alignment with human graders compared to representative baselines. Finally, comprehensive ablation studies confirm the effectiveness of the individual components designed in GradeOpt.<|reference_end|> | arxiv | @article{chu2024a,
title={A LLM-Powered Automatic Grading Framework with Human-Level Guidelines
Optimization},
author={Yucheng Chu and Hang Li and Kaiqi Yang and Harry Shomer and Hui Liu and Yasemin Copur-Gencturk and Jiliang Tang},
journal={arXiv preprint arXiv:2410.02165},
year={2024},
archivePrefix={arXiv},
eprint={2410.02165},
primaryClass={cs.AI cs.CL}
} | chu2024a |
arxiv-664883 | 2410.02167 | Training Nonlinear Transformers for Chain-of-Thought Inference: A Theoretical Generalization Analysis | <|reference_start|>Training Nonlinear Transformers for Chain-of-Thought Inference: A Theoretical Generalization Analysis: Chain-of-Thought (CoT) is an efficient prompting method that enables the reasoning ability of large language models by augmenting the query using multiple examples with multiple intermediate steps. Despite the empirical success, the theoretical understanding of how to train a Transformer to achieve the CoT ability remains less explored. This is primarily due to the technical challenges involved in analyzing the nonconvex optimization on nonlinear attention models. To the best of our knowledge, this work provides the first theoretical study of training Transformers with nonlinear attention to obtain the CoT generalization capability so that the resulting model can inference on unseen tasks when the input is augmented by examples of the new task. We first quantify the required training samples and iterations to train a Transformer model towards CoT ability. We then prove the success of its CoT generalization on unseen tasks with distribution-shifted testing data. Moreover, we theoretically characterize the conditions for an accurate reasoning output by CoT even when the provided reasoning examples contain noises and are not always accurate. In contrast, in-context learning (ICL), which can be viewed as one-step CoT without intermediate steps, may fail to provide an accurate output when CoT does. These theoretical findings are justified through experiments.<|reference_end|> | arxiv | @article{li2024training,
title={Training Nonlinear Transformers for Chain-of-Thought Inference: A
Theoretical Generalization Analysis},
author={Hongkang Li and Meng Wang and Songtao Lu and Xiaodong Cui and Pin-Yu Chen},
journal={arXiv preprint arXiv:2410.02167},
year={2024},
archivePrefix={arXiv},
eprint={2410.02167},
primaryClass={cs.LG cs.CL}
} | li2024training |
arxiv-664884 | 2410.02168 | Channel-aware Contrastive Conditional Diffusion for Multivariate Probabilistic Time Series Forecasting | <|reference_start|>Channel-aware Contrastive Conditional Diffusion for Multivariate Probabilistic Time Series Forecasting: Forecasting faithful trajectories of multivariate time series from practical scopes is essential for reasonable decision-making. Recent methods majorly tailor generative conditional diffusion models to estimate the target temporal predictive distribution. However, it remains an obstacle to enhance the exploitation efficiency of given implicit temporal predictive information to bolster conditional diffusion learning. To this end, we propose a generic channel-aware Contrastive Conditional Diffusion model entitled CCDM to achieve desirable Multivariate probabilistic forecasting, obviating the need for curated temporal conditioning inductive biases. In detail, we first design a channel-centric conditional denoising network to manage intra-variate variations and cross-variate correlations, which can lead to scalability on diverse prediction horizons and channel numbers. Then, we devise an ad-hoc denoising-based temporal contrastive learning to explicitly amplify the predictive mutual information between past observations and future forecasts. It can coherently complement naive step-wise denoising diffusion training and improve the forecasting accuracy and generality on unknown test time series. Besides, we offer theoretic insights on the benefits of such auxiliary contrastive training refinement from both neural mutual information and temporal distribution generalization aspects. The proposed CCDM can exhibit superior forecasting capability compared to current state-of-the-art diffusion forecasters over a comprehensive benchmark, with best MSE and CRPS outcomes on $66.67\%$ and $83.33\%$ cases. Our code is publicly available at https://github.com/LSY-Cython/CCDM.<|reference_end|> | arxiv | @article{li2024channel-aware,
title={Channel-aware Contrastive Conditional Diffusion for Multivariate
Probabilistic Time Series Forecasting},
author={Siyang Li and Yize Chen and Hui Xiong},
journal={arXiv preprint arXiv:2410.02168},
year={2024},
archivePrefix={arXiv},
eprint={2410.02168},
primaryClass={cs.LG}
} | li2024channel-aware |
arxiv-664885 | 2410.02169 | Simulation Results of Center-Manifold-Based Identification of Polynomial Nonlinear Systems with Uncontrollable Linearization | <|reference_start|>Simulation Results of Center-Manifold-Based Identification of Polynomial Nonlinear Systems with Uncontrollable Linearization: Recently, a system identification method based on center manifold is proposed to identify polynomial nonlinear systems with uncontrollable linearization. This note presents a numerical example to show the effectiveness of this method.<|reference_end|> | arxiv | @article{huang2024simulation,
title={Simulation Results of Center-Manifold-Based Identification of Polynomial
Nonlinear Systems with Uncontrollable Linearization},
author={Chao Huang and Hao Zhang and Zhuping Wang},
journal={arXiv preprint arXiv:2410.02169},
year={2024},
archivePrefix={arXiv},
eprint={2410.02169},
primaryClass={eess.SY cs.SY}
} | huang2024simulation |
arxiv-664886 | 2410.02170 | Extracting the Potential of Emerging Hardware Accelerators for Symmetric Eigenvalue Decomposition | <|reference_start|>Extracting the Potential of Emerging Hardware Accelerators for Symmetric Eigenvalue Decomposition: Benefiting from the advancement of hardware accelerators such as GPUs, deep neural networks and scientific computing applications can achieve superior performance. Recently, the computing capacity of emerging hardware accelerators has increased rapidly, while memory bandwidth has not kept pace with this growth. This disparity exacerbates the gap between computing and memory, leading to inefficiencies in conventional algorithms, as they are likely to be converted from compute-bound to memory-bound. Symmetric eigenvalue decomposition (EVD), a critical operation in various research domains including scientific computing, deep learning training, and inference algorithms, exhibits suboptimal performance, achieving less than 3\% hardware computing utilization on the H100 GPU. In this paper, we analyze the features of emerging hardware accelerators to identify the bottlenecks inherent in conventional EVD algorithms. To improve EVD performance, we propose several algorithmic optimizations aimed at solving the memory-bound problem and providing better utilization of the rich computing capacity and parallelism of the emerging hardware accelerators. Experimentally, our proposed method demonstrates significant speedups on tridiagonalization, the main workload that takes over 90\% of the elapsed time of EVD, compared to the SOTA cuSOLVER tridiagonalization, achieving up to 10.1x, 7.5x, and 2.3x improvements on H100, A100, and RTX 4090 GPUs, respectively. The end-to-end performance of the EVD solver is also up to 4.1x faster than cuSOLVER.<|reference_end|> | arxiv | @article{wang2024extracting,
title={Extracting the Potential of Emerging Hardware Accelerators for Symmetric
Eigenvalue Decomposition},
author={Hansheng Wang and Lu Shi and Zhekai Duan and Panruo Wu and Liwei Guo and Shaoshuai Zhang},
journal={arXiv preprint arXiv:2410.02170},
year={2024},
archivePrefix={arXiv},
eprint={2410.02170},
primaryClass={cs.DC}
} | wang2024extracting |
arxiv-664887 | 2410.02172 | Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation | <|reference_start|>Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation: Evaluating policies using off-policy data is crucial for applying reinforcement learning to real-world problems such as healthcare and autonomous driving. Previous methods for off-policy evaluation (OPE) generally suffer from high variance or irreducible bias, leading to unacceptably high prediction errors. In this work, we introduce STAR, a framework for OPE that encompasses a broad range of estimators -- which include existing OPE methods as special cases -- that achieve lower mean squared prediction errors. STAR leverages state abstraction to distill complex, potentially continuous problems into compact, discrete models which we call abstract reward processes (ARPs). Predictions from ARPs estimated from off-policy data are provably consistent (asymptotically correct). Rather than proposing a specific estimator, we present a new framework for OPE and empirically demonstrate that estimators within STAR outperform existing methods. The best STAR estimator outperforms baselines in all twelve cases studied, and even the median STAR estimator surpasses the baselines in seven out of the twelve cases.<|reference_end|> | arxiv | @article{chaudhari2024abstract,
title={Abstract Reward Processes: Leveraging State Abstraction for Consistent
Off-Policy Evaluation},
author={Shreyas Chaudhari and Ameet Deshpande and Bruno Castro da Silva and Philip S. Thomas},
journal={arXiv preprint arXiv:2410.02172},
year={2024},
archivePrefix={arXiv},
eprint={2410.02172},
primaryClass={cs.LG cs.AI stat.ML}
} | chaudhari2024abstract |
arxiv-664888 | 2410.02173 | Efficiently Deploying LLMs with Controlled Risk | <|reference_start|>Efficiently Deploying LLMs with Controlled Risk: Deploying large language models in production requires simultaneous attention to efficiency and risk control. Prior work has shown the possibility to cut costs while maintaining similar accuracy, but has neglected to focus on risk control. By contrast, here we present hierarchical chains with multi-level abstention (HCMA), which use model-intrinsic uncertainty to delegate queries along the LLM intelligence hierarchy, enabling training-free model switching based solely on black-box API calls. Our framework presents novel trade-offs between efficiency and risk. For example, deploying HCMA on MMLU cuts the error rate of Llama3 405B by 30% when the model is allowed to abstain on 20% of the queries. To calibrate HCMA for optimal performance, our approach uses data-efficient logistic regressions (based on a simple nonlinear feature transformation), which require only 50 or 100 labeled examples to achieve excellent calibration error (ECE), cutting ECE by 50% compared to naive Platt scaling. On free-form generation tasks, we find that chain-of-thought is ineffectual for selective prediction, whereas zero-shot prompting drives error to 0% on TruthfulQA at high abstention rates. As LLMs are increasingly deployed across computing environments with different capabilities (such as mobile, laptop, and cloud), our framework paves the way towards maintaining deployment efficiency while putting in place sharp risk controls.<|reference_end|> | arxiv | @article{zellinger2024efficiently,
title={Efficiently Deploying LLMs with Controlled Risk},
author={Michael J. Zellinger and Matt Thomson},
journal={arXiv preprint arXiv:2410.02173},
year={2024},
archivePrefix={arXiv},
eprint={2410.02173},
primaryClass={cs.LG cs.AI}
} | zellinger2024efficiently |
arxiv-664889 | 2410.02176 | Towards Better Generalization: Weight Decay Induces Low-rank Bias for Neural Networks | <|reference_start|>Towards Better Generalization: Weight Decay Induces Low-rank Bias for Neural Networks: We study the implicit bias towards low-rank weight matrices when training neural networks (NN) with Weight Decay (WD). We prove that when a ReLU NN is sufficiently trained with Stochastic Gradient Descent (SGD) and WD, its weight matrix is approximately a rank-two matrix. Empirically, we demonstrate that WD is a necessary condition for inducing this low-rank bias across both regression and classification tasks. Our work differs from previous studies as our theoretical analysis does not rely on common assumptions regarding the training data distribution, optimality of weight matrices, or specific training procedures. Furthermore, by leveraging the low-rank bias, we derive improved generalization error bounds and provide numerical evidence showing that better generalization can be achieved. Thus, our work offers both theoretical and empirical insights into the strong generalization performance of SGD when combined with WD.<|reference_end|> | arxiv | @article{chen2024towards,
title={Towards Better Generalization: Weight Decay Induces Low-rank Bias for
Neural Networks},
author={Ke Chen and Chugang Yi and Haizhao Yang},
journal={arXiv preprint arXiv:2410.02176},
year={2024},
archivePrefix={arXiv},
eprint={2410.02176},
primaryClass={cs.LG stat.ML}
} | chen2024towards |
arxiv-664890 | 2410.02179 | HATFormer: Historic Handwritten Arabic Text Recognition with Transformers | <|reference_start|>HATFormer: Historic Handwritten Arabic Text Recognition with Transformers: Arabic handwritten text recognition (HTR) is challenging, especially for historical texts, due to diverse writing styles and the intrinsic features of Arabic script. Additionally, Arabic handwriting datasets are smaller compared to English ones, making it difficult to train generalizable Arabic HTR models. To address these challenges, we propose HATFormer, a transformer-based encoder-decoder architecture that builds on a state-of-the-art English HTR model. By leveraging the transformer's attention mechanism, HATFormer captures spatial contextual information to address the intrinsic challenges of Arabic script through differentiating cursive characters, decomposing visual representations, and identifying diacritics. Our customization to historical handwritten Arabic includes an image processor for effective ViT information preprocessing, a text tokenizer for compact Arabic text representation, and a training pipeline that accounts for a limited amount of historic Arabic handwriting data. HATFormer achieves a character error rate (CER) of 8.6% on the largest public historical handwritten Arabic dataset, with a 51% improvement over the best baseline in the literature. HATFormer also attains a comparable CER of 4.2% on the largest private non-historical dataset. Our work demonstrates the feasibility of adapting an English HTR method to a low-resource language with complex, language-specific challenges, contributing to advancements in document digitization, information retrieval, and cultural preservation.<|reference_end|> | arxiv | @article{chan2024hatformer:,
title={HATFormer: Historic Handwritten Arabic Text Recognition with
Transformers},
author={Adrian Chan and Anupam Mijar and Mehreen Saeed and Chau-Wai Wong and Akram Khater},
journal={arXiv preprint arXiv:2410.02179},
year={2024},
archivePrefix={arXiv},
eprint={2410.02179},
primaryClass={cs.CV cs.CL cs.LG}
} | chan2024hatformer: |
arxiv-664891 | 2410.02182 | BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | <|reference_start|>BadCM: Invisible Backdoor Attack Against Cross-Modal Learning: Despite remarkable successes in unimodal learning tasks, backdoor attacks against cross-modal learning are still underexplored due to the limited generalization and inferior stealthiness when involving multiple modalities. Notably, since works in this area mainly inherit ideas from unimodal visual attacks, they struggle with dealing with diverse cross-modal attack circumstances and manipulating imperceptible trigger samples, which hinders their practicability in real-world applications. In this paper, we introduce a novel bilateral backdoor to fill in the missing pieces of the puzzle in the cross-modal backdoor and propose a generalized invisible backdoor framework against cross-modal learning (BadCM). Specifically, a cross-modal mining scheme is developed to capture the modality-invariant components as target poisoning areas, where well-designed trigger patterns injected into these regions can be efficiently recognized by the victim models. This strategy is adapted to different image-text cross-modal models, making our framework available to various attack scenarios. Furthermore, for generating poisoned samples of high stealthiness, we conceive modality-specific generators for visual and linguistic modalities that facilitate hiding explicit trigger patterns in modality-invariant regions. To the best of our knowledge, BadCM is the first invisible backdoor method deliberately designed for diverse cross-modal attacks within one unified framework. Comprehensive experimental evaluations on two typical applications, i.e., cross-modal retrieval and VQA, demonstrate the effectiveness and generalization of our method under multiple kinds of attack scenarios. Moreover, we show that BadCM can robustly evade existing backdoor defenses. 
Our code is available at https://github.com/xandery-geek/BadCM.<|reference_end|> | arxiv | @article{zhang2024badcm:,
title={BadCM: Invisible Backdoor Attack Against Cross-Modal Learning},
author={Zheng Zhang and Xu Yuan and Lei Zhu and Jingkuan Song and Liqiang Nie},
journal={IEEE Transactions on Image Processing, vol. 33, pp. 2558-2571,
2024},
year={2024},
doi={10.1109/TIP.2024.3378918},
archivePrefix={arXiv},
eprint={2410.02182},
primaryClass={cs.CV cs.CR cs.LG cs.MM}
} | zhang2024badcm: |
arxiv-664892 | 2410.02184 | CodeJudge: Evaluating Code Generation with Large Language Models | <|reference_start|>CodeJudge: Evaluating Code Generation with Large Language Models: Large Language Models (LLMs) have shown promising performance in code generation. However, how to reliably evaluate code generated by LLMs remains an unresolved problem. This paper presents CodeJudge, a code evaluation framework that leverages LLMs to evaluate the semantic correctness of generated code without the need for test cases. We investigate different ways to guide the LLM in performing "slow thinking" to arrive at an in-depth and reliable evaluation. We experimented with four LLMs as evaluators on four code generation datasets and five programming languages. The results show that CodeJudge significantly outperformed existing methods in most settings. Furthermore, compared with a SOTA GPT-3.5-based code evaluation method, CodeJudge achieved better results even when using a much smaller model, Llama-3-8B-Instruct. Our code and datasets are available on GitHub https://github.com/VichyTong/CodeJudge.<|reference_end|> | arxiv | @article{tong2024codejudge:,
title={CodeJudge: Evaluating Code Generation with Large Language Models},
author={Weixi Tong and Tianyi Zhang},
journal={arXiv preprint arXiv:2410.02184},
year={2024},
archivePrefix={arXiv},
eprint={2410.02184},
primaryClass={cs.LG cs.CL cs.SE}
} | tong2024codejudge: |
arxiv-664893 | 2410.02185 | POSIX: A Prompt Sensitivity Index For Large Language Models | <|reference_start|>POSIX: A Prompt Sensitivity Index For Large Language Models: Despite their remarkable capabilities, Large Language Models (LLMs) are found to be surprisingly sensitive to minor variations in prompts, often generating significantly divergent outputs in response to minor variations in the prompts, such as spelling errors, alteration of wording or the prompt template. However, while assessing the quality of an LLM, the focus often tends to be solely on its performance on downstream tasks, while very little to no attention is paid to prompt sensitivity. To fill this gap, we propose POSIX - a novel PrOmpt Sensitivity IndeX as a reliable measure of prompt sensitivity, thereby offering a more comprehensive evaluation of LLM performance. The key idea behind POSIX is to capture the relative change in loglikelihood of a given response upon replacing the corresponding prompt with a different intent-preserving prompt. We provide thorough empirical evidence demonstrating the efficacy of POSIX in capturing prompt sensitivity and subsequently use it to measure and thereby compare prompt sensitivity of various open-source LLMs. We find that merely increasing the parameter count or instruction tuning does not necessarily reduce prompt sensitivity whereas adding some few-shot exemplars, even just one, almost always leads to significant decrease in prompt sensitivity. We also find that alterations to prompt template lead to the highest sensitivity in the case of MCQ type tasks, whereas paraphrasing results in the highest sensitivity in open-ended generation tasks. The code for reproducing our results is open-sourced at https://github.com/kowndinya-renduchintala/POSIX.<|reference_end|> | arxiv | @article{chatterjee2024posix:,
title={POSIX: A Prompt Sensitivity Index For Large Language Models},
author={Anwoy Chatterjee and H S V N S Kowndinya Renduchintala and Sumit Bhatia and Tanmoy Chakraborty},
journal={arXiv preprint arXiv:2410.02185},
year={2024},
archivePrefix={arXiv},
eprint={2410.02185},
primaryClass={cs.CL cs.AI cs.LG}
} | chatterjee2024posix: |
arxiv-664894 | 2410.02189 | Agent-Oriented Planning in Multi-Agent Systems | <|reference_start|>Agent-Oriented Planning in Multi-Agent Systems: Through the collaboration of multiple agents possessing diverse expertise and tools, multi-agent systems achieve impressive progress in solving real-world problems. Given the user queries, the meta-agents, serving as the brain within these systems, are required to decompose the queries into multiple sub-tasks that can be allocated to suitable agents capable of solving them, so-called agent-oriented planning. In this study, we identify three critical design principles of agent-oriented planning, including solvability, completeness, and non-redundancy, to ensure that each sub-task is effectively resolved, leading to satisfactory responses to the original queries. These principles further inspire us to propose a novel framework for agent-oriented planning in multi-agent systems, leveraging a fast task decomposition and allocation process followed by an effective and efficient evaluation via a reward model. During the planning process, the meta-agent is also responsible for evaluating the performance of the expert agents, making timely adjustments to the sub-tasks and scheduling as necessary. Besides, we integrate a feedback loop into the proposed framework to further enhance the effectiveness and robustness of such a problem-solving process. Extensive experiments demonstrate the advancement of the proposed framework in solving real-world problems compared to both single-agent systems and existing planning strategies for multi-agent systems.<|reference_end|> | arxiv | @article{li2024agent-oriented,
title={Agent-Oriented Planning in Multi-Agent Systems},
author={Ao Li and Yuexiang Xie and Songze Li and Fugee Tsung and Bolin Ding and Yaliang Li},
journal={arXiv preprint arXiv:2410.02189},
year={2024},
archivePrefix={arXiv},
eprint={2410.02189},
primaryClass={cs.AI cs.LG cs.MA}
} | li2024agent-oriented |
arxiv-664895 | 2410.02191 | A Survey on Point-of-Interest Recommendation: Models, Architectures, and Security | <|reference_start|>A Survey on Point-of-Interest Recommendation: Models, Architectures, and Security: The widespread adoption of smartphones and Location-Based Social Networks has led to a massive influx of spatio-temporal data, creating unparalleled opportunities for enhancing Point-of-Interest (POI) recommendation systems. These advanced POI systems are crucial for enriching user experiences, enabling personalized interactions, and optimizing decision-making processes in the digital landscape. However, existing surveys tend to focus on traditional approaches and few of them delve into cutting-edge developments, emerging architectures, as well as security considerations in POI recommendations. To address this gap, our survey stands out by offering a comprehensive, up-to-date review of POI recommendation systems, covering advancements in models, architectures, and security aspects. We systematically examine the transition from traditional models to advanced techniques such as large language models. Additionally, we explore the architectural evolution from centralized to decentralized and federated learning systems, highlighting the improvements in scalability and privacy. Furthermore, we address the increasing importance of security, examining potential vulnerabilities and privacy-preserving approaches. Our taxonomy provides a structured overview of the current state of POI recommendation, while we also identify promising directions for future research in this rapidly advancing field.<|reference_end|> | arxiv | @article{zhang2024a,
title={A Survey on Point-of-Interest Recommendation: Models, Architectures, and
Security},
author={Qianru Zhang and Peng Yang and Junliang Yu and Haixin Wang and Xingwei He and Siu-Ming Yiu and Hongzhi Yin},
journal={arXiv preprint arXiv:2410.02191},
year={2024},
note={20 pages},
archivePrefix={arXiv},
eprint={2410.02191},
primaryClass={cs.IR cs.AI cs.CE cs.LG}
} | zhang2024a |
arxiv-664896 | 2410.02193 | Guiding Long-Horizon Task and Motion Planning with Vision Language Models | <|reference_start|>Guiding Long-Horizon Task and Motion Planning with Vision Language Models: Vision-Language Models (VLM) can generate plausible high-level plans when prompted with a goal, the context, an image of the scene, and any planning constraints. However, there is no guarantee that the predicted actions are geometrically and kinematically feasible for a particular robot embodiment. As a result, many prerequisite steps such as opening drawers to access objects are often omitted in their plans. Robot task and motion planners can generate motion trajectories that respect the geometric feasibility of actions and insert physically necessary actions, but do not scale to everyday problems that require common-sense knowledge and involve large state spaces comprised of many variables. We propose VLM-TAMP, a hierarchical planning algorithm that leverages a VLM to generate both semantically-meaningful and horizon-reducing intermediate subgoals that guide a task and motion planner. When a subgoal or action cannot be refined, the VLM is queried again for replanning. We evaluate VLM-TAMP on kitchen tasks where a robot must accomplish cooking goals that require performing 30-50 actions in sequence and interacting with up to 21 objects. VLM-TAMP substantially outperforms baselines that rigidly and independently execute VLM-generated action sequences, both in terms of success rates (50 to 100% versus 0%) and average task completion percentage (72 to 100% versus 15 to 45%). See project site https://zt-yang.github.io/vlm-tamp-robot/ for more information.<|reference_end|> | arxiv | @article{yang2024guiding,
title={Guiding Long-Horizon Task and Motion Planning with Vision Language
Models},
author={Zhutian Yang and Caelan Garrett and Dieter Fox and Tom\'as Lozano-P\'erez and Leslie Pack Kaelbling},
journal={arXiv preprint arXiv:2410.02193},
year={2024},
archivePrefix={arXiv},
eprint={2410.02193},
primaryClass={cs.RO}
} | yang2024guiding |
arxiv-664897 | 2410.02195 | BACKTIME: Backdoor Attacks on Multivariate Time Series Forecasting | <|reference_start|>BACKTIME: Backdoor Attacks on Multivariate Time Series Forecasting: Multivariate Time Series (MTS) forecasting is a fundamental task with numerous real-world applications, such as transportation, climate, and epidemiology. While a myriad of powerful deep learning models have been developed for this task, few works have explored the robustness of MTS forecasting models to malicious attacks, which is crucial for their trustworthy employment in high-stake scenarios. To address this gap, we dive deep into the backdoor attacks on MTS forecasting models and propose an effective attack method named BackTime. By subtly injecting a few stealthy triggers into the MTS data, BackTime can alter the predictions of the forecasting model according to the attacker's intent. Specifically, BackTime first identifies vulnerable timestamps in the data for poisoning, and then adaptively synthesizes stealthy and effective triggers by solving a bi-level optimization problem with a GNN-based trigger generator. Extensive experiments across multiple datasets and state-of-the-art MTS forecasting models demonstrate the effectiveness, versatility, and stealthiness of BackTime attacks. The code is available at https://github.com/xiaolin-cs/BackTime.<|reference_end|> | arxiv | @article{lin2024backtime:,
title={BACKTIME: Backdoor Attacks on Multivariate Time Series Forecasting},
author={Xiao Lin and Zhining Liu and Dongqi Fu and Ruizhong Qiu and Hanghang Tong},
journal={arXiv preprint arXiv:2410.02195},
year={2024},
archivePrefix={arXiv},
eprint={2410.02195},
primaryClass={cs.LG cs.AI cs.CR}
} | lin2024backtime: |
arxiv-664898 | 2410.02197 | General Preference Modeling with Preference Representations for Aligning Language Models | <|reference_start|>General Preference Modeling with Preference Representations for Aligning Language Models: Modeling human preferences is crucial for aligning foundation models with human values. Traditional reward modeling methods, such as the Bradley-Terry (BT) reward model, fall short in expressiveness, particularly in addressing intransitive preferences. Although supervised pair preference models (PairPM) can express general preferences, their implementation is highly ad-hoc and cannot guarantee a consistent preference probability of compared pairs. Additionally, they impose high computational costs due to their quadratic query complexity when comparing multiple responses. In this paper, we introduce preference representation learning, an approach that embeds responses into a latent space to capture intricate preference structures efficiently, achieving linear query complexity. Additionally, we propose preference score-based General Preference Optimization (GPO), which generalizes reward-based reinforcement learning from human feedback. Experimental results show that our General Preference representation model (GPM) outperforms the BT reward model on the RewardBench benchmark with a margin of up to 5.6% and effectively models cyclic preferences where any BT reward model behaves like a random guess. Furthermore, evaluations on downstream tasks such as AlpacaEval2.0 and MT-Bench, following the language model post-training with GPO and our general preference model, reveal substantial performance improvements with margins up to 9.3%. These findings indicate that our method may enhance the alignment of foundation models with nuanced human values. The code is available at https://github.com/general-preference/general-preference-model.<|reference_end|> | arxiv | @article{zhang2024general,
title={General Preference Modeling with Preference Representations for Aligning
Language Models},
author={Yifan Zhang and Ge Zhang and Yue Wu and Kangping Xu and Quanquan Gu},
journal={arXiv preprint arXiv:2410.02197},
year={2024},
archivePrefix={arXiv},
eprint={2410.02197},
primaryClass={cs.AI cs.CL cs.LG}
} | zhang2024general |
arxiv-664899 | 2410.02198 | G2T-LLM: Graph-to-Tree Text Encoding for Molecule Generation with Fine-Tuned Large Language Models | <|reference_start|>G2T-LLM: Graph-to-Tree Text Encoding for Molecule Generation with Fine-Tuned Large Language Models: We introduce G2T-LLM, a novel approach for molecule generation that uses graph-to-tree text encoding to transform graph-based molecular structures into a hierarchical text format optimized for large language models (LLMs). This encoding converts complex molecular graphs into tree-structured formats, such as JSON and XML, which LLMs are particularly adept at processing due to their extensive pre-training on these types of data. By leveraging the flexibility of LLMs, our approach allows for intuitive interaction using natural language prompts, providing a more accessible interface for molecular design. Through supervised fine-tuning, G2T-LLM generates valid and coherent chemical structures, addressing common challenges like invalid outputs seen in traditional graph-based methods. While LLMs are computationally intensive, they offer superior generalization and adaptability, enabling the generation of diverse molecular structures with minimal task-specific customization. The proposed approach achieved comparable performances with state-of-the-art methods on various benchmark molecular generation datasets, demonstrating its potential as a flexible and innovative tool for AI-driven molecular design.<|reference_end|> | arxiv | @article{yu2024g2t-llm:,
title={G2T-LLM: Graph-to-Tree Text Encoding for Molecule Generation with
Fine-Tuned Large Language Models},
author={Zhaoning Yu and Xiangyang Xu and Hongyang Gao},
journal={arXiv preprint arXiv:2410.02198},
year={2024},
archivePrefix={arXiv},
eprint={2410.02198},
primaryClass={cs.LG cs.AI q-bio.QM}
} | yu2024g2t-llm: |
arxiv-664900 | 2410.02199 | Deep Koopman-layered Model with Universal Property Based on Toeplitz Matrices | <|reference_start|>Deep Koopman-layered Model with Universal Property Based on Toeplitz Matrices: We propose deep Koopman-layered models with learnable parameters in the form of Toeplitz matrices for analyzing the dynamics of time-series data. The proposed model has both theoretical solidness and flexibility. By virtue of the universal property of Toeplitz matrices and the reproducing property underlined in the model, we can show its universality and the generalization property. In addition, the flexibility of the proposed model enables the model to fit time-series data coming from nonautonomous dynamical systems. When training the model, we apply Krylov subspace methods for efficient computations. In addition, the proposed model can be regarded as a neural ODE-based model. In this sense, the proposed model establishes a new connection among Koopman operators, neural ODEs, and numerical linear algebraic methods.<|reference_end|> | arxiv | @article{hashimoto2024deep,
title={Deep Koopman-layered Model with Universal Property Based on Toeplitz
Matrices},
author={Yuka Hashimoto and Tomoharu Iwata},
journal={arXiv preprint arXiv:2410.02199},
year={2024},
archivePrefix={arXiv},
eprint={2410.02199},
primaryClass={cs.LG math.DS math.FA stat.ML}
} | hashimoto2024deep |