corpus_id: string (7–12 chars)
paper_id: string (9–16 chars)
title: string (1–261 chars)
abstract: string (70–4.02k chars)
source: string (1 class)
bibtex: string (208–20.9k chars)
citation_key: string (6–100 chars)
arxiv-667101
2410.06072
Training-free LLM-generated Text Detection by Mining Token Probability Sequences
<|reference_start|>Training-free LLM-generated Text Detection by Mining Token Probability Sequences: Large language models (LLMs) have demonstrated remarkable capabilities in generating high-quality texts across diverse domains. However, the potential misuse of LLMs has raised significant concerns, underscoring the urgent need for reliable detection of LLM-generated texts. Conventional training-based detectors often struggle with generalization, particularly in cross-domain and cross-model scenarios. In contrast, training-free methods, which focus on inherent discrepancies through carefully designed statistical features, offer improved generalization and interpretability. Despite this, existing training-free detection methods typically rely on global text sequence statistics, neglecting the modeling of local discriminative features, thereby limiting their detection efficacy. In this work, we introduce a novel training-free detector, termed \textbf{Lastde}, which synergizes local and global statistics for enhanced detection. For the first time, we introduce time series analysis to LLM-generated text detection, capturing the temporal dynamics of token probability sequences. By integrating these local statistics with global ones, our detector reveals significant disparities between human and LLM-generated texts. We also propose an efficient alternative, \textbf{Lastde++}, to enable real-time detection. Extensive experiments on six datasets involving cross-domain, cross-model, and cross-lingual detection scenarios, under both white-box and black-box settings, demonstrate that our method consistently achieves state-of-the-art performance. Furthermore, our approach exhibits greater robustness against paraphrasing attacks compared to existing baseline methods.<|reference_end|>
arxiv
@article{xu2024training-free, title={Training-free LLM-generated Text Detection by Mining Token Probability Sequences}, author={Yihuai Xu and Yongwei Wang and Yifei Bi and Huangsen Cao and Zhouhan Lin and Yu Zhao and Fei Wu}, journal={arXiv preprint arXiv:2410.06072}, year={2024}, archivePrefix={arXiv}, eprint={2410.06072}, primaryClass={cs.CL} }
xu2024training-free
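The local-plus-global idea in the entry above can be sketched with a toy scorer. This is an illustrative assumption, not the paper's Lastde: the choice of statistics, the window size, and the blending weight are invented here for exposition.

```python
# Toy detector sketch: blend a global statistic (mean token log-probability)
# with a local one (average dispersion over sliding windows) computed from a
# token log-probability sequence. Illustrative only, not the paper's method.
from statistics import mean, pstdev

def global_stat(logprobs):
    # Global statistic: average log-probability over the whole sequence.
    return mean(logprobs)

def local_stat(logprobs, window=4):
    # Local statistic: mean standard deviation inside sliding windows, a crude
    # stand-in for the paper's time-series features.
    windows = [logprobs[i:i + window] for i in range(len(logprobs) - window + 1)]
    return mean(pstdev(w) for w in windows)

def detection_score(logprobs, alpha=0.5, window=4):
    # Blend local and global statistics into one score; a real detector would
    # calibrate a decision threshold on scores like this.
    return alpha * global_stat(logprobs) + (1 - alpha) * local_stat(logprobs, window)

seq = [-2.1, -0.3, -4.5, -0.8, -1.2, -3.3, -0.5, -2.7]
print(detection_score(seq))
```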
arxiv-667102
2410.06074
Scalable Mechanistic Neural Networks
<|reference_start|>Scalable Mechanistic Neural Networks: We propose Scalable Mechanistic Neural Network (S-MNN), an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences. By reformulating the original Mechanistic Neural Network (MNN) (Pervez et al., 2024), we reduce the computational time and space complexities from cubic and quadratic with respect to the sequence length, respectively, to linear. This significant improvement enables efficient modeling of long-term dynamics without sacrificing accuracy or interpretability. Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources. Consequently, S-MNN can drop-in replace the original MNN in applications, providing a practical and efficient tool for integrating mechanistic bottlenecks into neural network models of complex dynamical systems.<|reference_end|>
arxiv
@article{chen2024scalable, title={Scalable Mechanistic Neural Networks}, author={Jiale Chen and Dingling Yao and Adeel Pervez and Dan Alistarh and Francesco Locatello}, journal={arXiv preprint arXiv:2410.06074}, year={2024}, archivePrefix={arXiv}, eprint={2410.06074}, primaryClass={cs.LG} }
chen2024scalable
arxiv-667103
2410.06079
Seepage analysis and control of the Sahand rockfill dam drainage using instrumental data
<|reference_start|>Seepage analysis and control of the Sahand rockfill dam drainage using instrumental data: The finite element method is an effective numerical method for accurate seepage analysis, capable of determining the outlet flow and pore water pressures at any point of the dam body and foundation. In the present study, the seepage analysis of the Sahand Dam has been performed using the finite element method and PLAXIS software, and the model has been validated against instrumental data. After validation, the permeability coefficients have been used to investigate the factors affecting seepage at the normal reservoir level, such as the cutoff wall, upstream concrete cover, clay blanket, and foundation depth. Results show that implementing the clay blanket together with the upstream concrete cover causes a significant decrease in the discharge of the dam. Finally, by investigating the parameters affecting dam seepage, a combined optimal model was developed, in which the discharge was reduced to one-third of the corresponding value for the initial dam model.<|reference_end|>
arxiv
@article{nikrou2024seepage, title={Seepage analysis and control of the Sahand rockfill dam drainage using instrumental data}, author={Parvaneh Nikrou and Sajjad Pirboudaghi}, journal={arXiv preprint arXiv:2410.06079}, year={2024}, archivePrefix={arXiv}, eprint={2410.06079}, primaryClass={math.NA cs.NA} }
nikrou2024seepage
arxiv-667104
2410.06080
Packing a Knapsack with Items Owned by Strategic Agents
<|reference_start|>Packing a Knapsack with Items Owned by Strategic Agents: This paper considers a scenario within the field of mechanism design without money where a mechanism designer is interested in selecting items with maximum total value under a knapsack constraint. The items, however, are controlled by strategic agents who aim to maximize the total value of their items in the knapsack. This is a natural setting, e.g., when agencies select projects for funding, companies select products for sale in their shops, or hospitals schedule MRI scans for the day. A mechanism governing the packing of the knapsack is strategyproof if no agent can benefit from hiding items controlled by them to the mechanism. We are interested in mechanisms that are strategyproof and $\alpha$-approximate in the sense that they always approximate the maximum value of the knapsack by a factor of $\alpha \in [0,1]$. First, we give a deterministic mechanism that is $\frac{1}{3}$-approximate. For the special case where all items have unit density, we design a $\frac{1}{\phi}$-approximate mechanism where $1/\phi \approx 0.618$ is the inverse of the golden ratio. This result is tight as we show that no deterministic strategyproof mechanism with a better approximation exists. We further give randomized mechanisms with approximation guarantees of $1/2$ for the general case and $2/3$ for the case of unit densities. For both cases, no strategyproof mechanism can achieve an approximation guarantee better than $1/(5\phi -7)\approx 0.917$.<|reference_end|>
arxiv
@article{cembrano2024packing, title={Packing a Knapsack with Items Owned by Strategic Agents}, author={Javier Cembrano and Max Klimm and Martin Knaack}, journal={arXiv preprint arXiv:2410.06080}, year={2024}, archivePrefix={arXiv}, eprint={2410.06080}, primaryClass={cs.GT econ.TH math.OC} }
cembrano2024packing
arxiv-667105
2410.06083
Classification of simulation relations for symbolic control
<|reference_start|>Classification of simulation relations for symbolic control: Abstraction-based control design is a promising approach for ensuring safety-critical control of complex cyber-physical systems. A key aspect of this methodology is the relation between the original and abstract systems, which ensures that the abstract controller can be transformed into a valid controller for the original system through a concretization procedure. In this paper, we provide a comprehensive and systematic framework that characterizes various simulation relations, through their associated concretization procedures. We introduce the concept of augmented system, which universally enables a feedback refinement relation with the abstract system. This augmented system encapsulates the specific characteristics of each simulation relation within an interface, enabling a plug-and-play control architecture. Our results demonstrate that the existence of a particular simulation relation between the concrete and abstract systems is equivalent to the implementability of a specific control architecture, which depends on the considered simulation relation. This allows us to introduce new types of relations, and to establish the advantages and drawbacks of different relations, which we exhibit through detailed examples.<|reference_end|>
arxiv
@article{calbert2024classification, title={Classification of simulation relations for symbolic control}, author={Julien Calbert and Antoine Girard and Rapha{\"e}l M. Jungers}, journal={arXiv preprint arXiv:2410.06083}, year={2024}, archivePrefix={arXiv}, eprint={2410.06083}, primaryClass={eess.SY cs.SY math.DS} }
calbert2024classification
arxiv-667106
2410.06084
Diversity-Rewarded CFG Distillation
<|reference_start|>Diversity-Rewarded CFG Distillation: Generative models are transforming creative domains such as music generation, with inference-time strategies like Classifier-Free Guidance (CFG) playing a crucial role. However, CFG doubles inference cost while limiting originality and diversity across generated contents. In this paper, we introduce diversity-rewarded CFG distillation, a novel finetuning procedure that distills the strengths of CFG while addressing its limitations. Our approach optimises two training objectives: (1) a distillation objective, encouraging the model alone (without CFG) to imitate the CFG-augmented predictions, and (2) an RL objective with a diversity reward, promoting the generation of diverse outputs for a given prompt. By finetuning, we learn model weights with the ability to generate high-quality and diverse outputs, without any inference overhead. This also unlocks the potential of weight-based model merging strategies: by interpolating between the weights of two models (the first focusing on quality, the second on diversity), we can control the quality-diversity trade-off at deployment time, and even further boost performance. We conduct extensive experiments on the MusicLM (Agostinelli et al., 2023) text-to-music generative model, where our approach surpasses CFG in terms of quality-diversity Pareto optimality. According to human evaluators, our finetuned-then-merged model generates samples with higher quality-diversity than the base model augmented with CFG. Explore our generations at https://google-research.github.io/seanet/musiclm/diverse_music/.<|reference_end|>
arxiv
@article{cideron2024diversity-rewarded, title={Diversity-Rewarded CFG Distillation}, author={Geoffrey Cideron and Andrea Agostinelli and Johan Ferret and Sertan Girgin and Romuald Elie and Olivier Bachem and Sarah Perrin and Alexandre Ram{\'e}}, journal={arXiv preprint arXiv:2410.06084}, year={2024}, archivePrefix={arXiv}, eprint={2410.06084}, primaryClass={cs.LG} }
cideron2024diversity-rewarded
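The CFG mechanics behind the entry above can be illustrated with a short sketch. The guidance formula is the standard classifier-free guidance combination; the distillation loss below is an assumption-level toy (a plain mean squared error), not the paper's training objective.

```python
# Classifier-free guidance combines conditional and unconditional predictions,
# which is why it needs two forward passes at inference time. Distillation
# trains a single model to match the guided output directly.
def cfg_logits(cond, uncond, guidance=2.0):
    # Standard CFG combination: uncond + g * (cond - uncond).
    return [u + guidance * (c - u) for c, u in zip(cond, uncond)]

def distill_loss(student, cond, uncond, guidance=2.0):
    # Toy distillation objective: mean squared error between the student's
    # logits and the CFG-augmented target (illustrative, not the paper's loss).
    target = cfg_logits(cond, uncond, guidance)
    return sum((s - t) ** 2 for s, t in zip(student, target)) / len(target)
```

A student whose logits already equal the guided target incurs zero loss, so minimising this objective lets one forward pass mimic CFG without the doubled inference cost.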
arxiv-667107
2410.06086
The GDPR's Rules on Data Breaches: Analysing Their Rationales and Effects
<|reference_start|>The GDPR's Rules on Data Breaches: Analysing Their Rationales and Effects: The General Data Protection Regulation (GDPR) requires an organisation that suffers a data breach to notify the competent Data Protection Authority. The organisation must also inform the relevant individuals, when a data breach threatens their rights and freedoms. This paper focuses on the following question: what are the goals of the GDPR's data breach notification obligation, and does the obligation achieve those goals? We assess the obligation in the light of those goals. We refer to insights from information security and economics, and present them in a reader-friendly way for lawyers. Our main conclusion is that the GDPR's data breach rules are likely to contribute to the goals. For instance, the data breach notification obligation can nudge organisations towards better security; such an obligation enables regulators to perform their duties; and such an obligation improves transparency and accountability. However, the paper also warns that we should not have unrealistic expectations of the possibilities for people to protect their interests after a data breach notice. Likewise, we should not have high expectations of people switching to other service providers after receiving a data breach notification. Lastly, the paper calls for Data Protection Authorities to publish more information about reported data breaches. Such information can help to analyse security threats.<|reference_end|>
arxiv
@article{borgesius2024the, title={The GDPR's Rules on Data Breaches: Analysing Their Rationales and Effects}, author={Frederik Zuiderveen Borgesius and Hadi Asghari and No{\"e}l Bangma and Jaap-Henk Hoepman}, journal={arXiv preprint arXiv:2410.06086}, year={2024}, archivePrefix={arXiv}, eprint={2410.06086}, primaryClass={cs.CR cs.CY} }
borgesius2024the
arxiv-667108
2410.06089
TOWER: Tree Organized Weighting for Evaluating Complex Instructions
<|reference_start|>TOWER: Tree Organized Weighting for Evaluating Complex Instructions: Evaluating the ability of large language models (LLMs) to follow complex human-written instructions is essential for their deployment in real-world applications. While benchmarks like Chatbot Arena use human judges to assess model performance, they are resource-intensive and time-consuming. Alternative methods using LLMs as judges, such as AlpacaEval, MT Bench, WildBench, and InFoBench offer improvements but still do not capture that certain complex instruction aspects are more important than others to follow. To address this gap, we propose a novel evaluation metric, \textsc{TOWER}, that incorporates human-judged importance into the assessment of complex instruction following. We show that human annotators agree with tree-based representations of these complex instructions nearly as much as they agree with other human annotators. We release tree-based annotations of the InFoBench dataset and the corresponding evaluation code to facilitate future research.<|reference_end|>
arxiv
@article{ziems2024tower:, title={TOWER: Tree Organized Weighting for Evaluating Complex Instructions}, author={Noah Ziems and Zhihan Zhang and Meng Jiang}, journal={arXiv preprint arXiv:2410.06089}, year={2024}, archivePrefix={arXiv}, eprint={2410.06089}, primaryClass={cs.CL cs.AI} }
ziems2024tower:
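The tree-weighting idea in the entry above can be sketched in a few lines. The node layout and the weighted-average aggregation here are illustrative assumptions, not the paper's exact metric: instruction aspects form a tree whose nodes carry importance weights, and a response's score aggregates leaf judgments upward.

```python
# Hedged sketch of tree-organized weighting: a node is either a scored leaf
# {"weight": w, "score": s} or an internal node {"weight": w, "children": [...]}.
# An internal node's score is the importance-weighted average of its children.
def tree_score(node):
    if "children" not in node:
        return node["score"]
    total = sum(c["weight"] for c in node["children"])
    return sum(c["weight"] * tree_score(c) for c in node["children"]) / total
```

Because weights are applied at every level, failing a high-importance aspect drags the overall score down more than failing a minor one, which is the gap the metric aims to close relative to uniform per-criterion averaging.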
arxiv-667109
2410.06094
Listen to the Patient: Enhancing Medical Dialogue Generation with Patient Hallucination Detection and Mitigation
<|reference_start|>Listen to the Patient: Enhancing Medical Dialogue Generation with Patient Hallucination Detection and Mitigation: Medical dialogue systems aim to provide medical services through patient-agent conversations. Previous methods typically regard patients as ideal users, focusing mainly on common challenges in dialogue systems, while neglecting the potential biases or misconceptions that might be introduced by real patients, who are typically non-experts. This study investigates the discrepancy between patients' expressions during medical consultations and their actual health conditions, defined as patient hallucination. Such phenomena often arise from patients' lack of knowledge and comprehension, concerns, and anxieties, resulting in the transmission of inaccurate or wrong information during consultations. To address this issue, we propose MedPH, a Medical dialogue generation method for mitigating the problem of Patient Hallucinations designed to detect and cope with hallucinations. MedPH incorporates a detection method that utilizes one-dimensional structural entropy over a temporal dialogue entity graph, and a mitigation strategy based on hallucination-related information to guide patients in expressing their actual conditions. Experimental results indicate the high effectiveness of MedPH when compared to existing approaches in both medical entity prediction and response generation tasks, while also demonstrating its effectiveness in mitigating hallucinations within interactive scenarios.<|reference_end|>
arxiv
@article{qin2024listen, title={Listen to the Patient: Enhancing Medical Dialogue Generation with Patient Hallucination Detection and Mitigation}, author={Lang Qin and Yao Zhang and Hongru Liang and Adam Jatowt and Zhenglu Yang}, journal={arXiv preprint arXiv:2410.06094}, year={2024}, archivePrefix={arXiv}, eprint={2410.06094}, primaryClass={cs.CL} }
qin2024listen
arxiv-667110
2410.06095
Smoothed analysis for graph isomorphism
<|reference_start|>Smoothed analysis for graph isomorphism: There is no known polynomial-time algorithm for graph isomorphism testing, but elementary combinatorial "refinement" algorithms seem to be very efficient in practice. Some philosophical justification is provided by a classical theorem of Babai, Erd\H{o}s and Selkow: an extremely simple polynomial-time combinatorial algorithm (variously known as "na\"ive refinement", "na\"ive vertex classification", "colour refinement" or the "1-dimensional Weisfeiler-Leman algorithm") yields a so-called canonical labelling scheme for "almost all graphs". More precisely, for a typical outcome of a random graph $G(n,1/2)$, this simple combinatorial algorithm assigns labels to vertices in a way that easily permits isomorphism-testing against any other graph. We improve the Babai-Erd\H{o}s-Selkow theorem in two directions. First, we consider randomly perturbed graphs, in accordance with the smoothed analysis philosophy of Spielman and Teng: for any graph $G$, na\"ive refinement becomes effective after a tiny random perturbation to $G$ (specifically, the addition and removal of $O(n\log n)$ random edges). Actually, with a twist on na\"ive refinement, we show that $O(n)$ random additions and removals suffice. These results significantly improve on previous work of Gaudio-R\'acz-Sridhar, and are in certain senses best-possible. Second, we complete a long line of research on canonical labelling of random graphs: for any $p$ (possibly depending on $n$), we prove that a random graph $G(n,p)$ can typically be canonically labelled in polynomial time. This is most interesting in the extremely sparse regime where $p$ has order of magnitude $c/n$; denser regimes were previously handled by Bollob\'as, Czajka-Pandurangan, and Linial-Mosheiff. Our proof also provides a description of the automorphism group of a typical outcome of $G(n,p_n)$ (slightly correcting a prediction of Linial-Mosheiff).<|reference_end|>
arxiv
@article{anastos2024smoothed, title={Smoothed analysis for graph isomorphism}, author={Michael Anastos and Matthew Kwan and Benjamin Moore}, journal={arXiv preprint arXiv:2410.06095}, year={2024}, archivePrefix={arXiv}, eprint={2410.06095}, primaryClass={math.CO cs.CC cs.DS} }
anastos2024smoothed
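The colour refinement (1-dimensional Weisfeiler-Leman) algorithm central to the entry above is simple enough to state directly: repeatedly recolour each vertex by its current colour plus the multiset of its neighbours' colours, until the number of colour classes stops growing.

```python
# Colour refinement (naive vertex classification / 1-WL): iterate until the
# colour partition is stable. adj maps each vertex to its neighbours.
def colour_refinement(adj):
    colour = {v: 0 for v in adj}
    while True:
        # New signature: own colour plus sorted multiset of neighbour colours.
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        # Relabel signatures to small integers, keeping equal signatures equal.
        palette = {s: i for i, s in enumerate(sorted(set(signature.values())))}
        new = {v: palette[signature[v]] for v in adj}
        # Stable when the refinement step produces no additional classes.
        if len(set(new.values())) == len(set(colour.values())):
            return new
        colour = new

# A path on 4 vertices: endpoints end up coloured differently from interior
# vertices, since their neighbourhood multisets differ.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(colour_refinement(path))
```

When the stable colouring assigns every vertex a distinct colour, it is a canonical labelling; the theorem cited above says this happens for almost all graphs.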
arxiv-667111
2410.06097
Decoding Decoded: Understanding Hyperparameter Effects in Open-Ended Text Generation
<|reference_start|>Decoding Decoded: Understanding Hyperparameter Effects in Open-Ended Text Generation: Decoding strategies for large language models (LLMs) are a critical but often underexplored aspect of text generation tasks. Since LLMs produce probability distributions over the entire vocabulary, various decoding methods have been developed to transform these probabilities into coherent and fluent text, each with its own set of hyperparameters. In this study, we present a large-scale, comprehensive analysis of how hyperparameter selection affects text quality in open-ended text generation across multiple LLMs, datasets, and evaluation metrics. Through an extensive sensitivity analysis, we provide practical guidelines for hyperparameter tuning and demonstrate the substantial influence of these choices on text quality. Using three established datasets, spanning factual domains (e.g., news) and creative domains (e.g., fiction), we show that hyperparameter tuning significantly impacts generation quality, though its effects vary across models and tasks. We offer in-depth insights into these effects, supported by both human evaluations and a synthesis of widely-used automatic evaluation metrics.<|reference_end|>
arxiv
@article{arias2024decoding, title={Decoding Decoded: Understanding Hyperparameter Effects in Open-Ended Text Generation}, author={Esteban Garces Arias and Meimingwei Li and Christian Heumann and Matthias A{\ss}enmacher}, journal={arXiv preprint arXiv:2410.06097}, year={2024}, archivePrefix={arXiv}, eprint={2410.06097}, primaryClass={cs.CL cs.LG} }
arias2024decoding
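Two of the decoding hyperparameters studied in the entry above, temperature and top-k, can be shown with a minimal sampler. These are the standard definitions, not the paper's code; the function name is ours.

```python
# Minimal decoding sketch: temperature rescales logits (lower = sharper,
# closer to greedy), and top-k truncates the candidate set before sampling.
import math
import random

def sample_token(logits, temperature=1.0, top_k=None, rng=random):
    scaled = [l / temperature for l in logits]
    # Candidate indices ordered by score, truncated to the k best if requested.
    order = sorted(range(len(logits)), key=lambda i: scaled[i], reverse=True)
    if top_k is not None:
        order = order[:top_k]
    # Softmax over the surviving candidates (max-subtracted for stability).
    m = max(scaled[i] for i in order)
    weights = [math.exp(scaled[i] - m) for i in order]
    return rng.choices(order, weights=weights, k=1)[0]
```

With `top_k=1` the sampler is greedy decoding; raising `temperature` flattens the distribution and increases diversity, which is exactly the quality/diversity trade-off the sensitivity analysis measures.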
arxiv-667112
2410.06101
Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning
<|reference_start|>Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning: Reinforcement learning (RL) has emerged as a pivotal technique for fine-tuning large language models (LLMs) on specific tasks. However, prevailing RL fine-tuning methods predominantly rely on PPO and its variants. Though these algorithms are effective in general RL settings, they often exhibit suboptimal performance and vulnerability to distribution collapse when applied to the fine-tuning of LLMs. In this paper, we propose CORY, extending the RL fine-tuning of LLMs to a sequential cooperative multi-agent reinforcement learning framework, to leverage the inherent coevolution and emergent capabilities of multi-agent systems. In CORY, the LLM to be fine-tuned is initially duplicated into two autonomous agents: a pioneer and an observer. The pioneer generates responses based on queries, while the observer generates responses using both the queries and the pioneer's responses. The two agents are trained together. During training, the agents exchange roles periodically, fostering cooperation and coevolution between them. Experiments evaluate CORY's performance by fine-tuning GPT-2 and Llama-2 under subjective and objective reward functions on the IMDB Review and GSM8K datasets, respectively. Results show that CORY outperforms PPO in terms of policy optimality, resistance to distribution collapse, and training robustness, thereby underscoring its potential as a superior methodology for refining LLMs in real-world applications.<|reference_end|>
arxiv
@article{ma2024coevolving, title={Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning}, author={Hao Ma and Tianyi Hu and Zhiqiang Pu and Boyin Liu and Xiaolin Ai and Yanyan Liang and Min Chen}, journal={arXiv preprint arXiv:2410.06101}, year={2024}, archivePrefix={arXiv}, eprint={2410.06101}, primaryClass={cs.AI cs.MA} }
ma2024coevolving
arxiv-667113
2410.06104
RefineStyle: Dynamic Convolution Refinement for StyleGAN
<|reference_start|>RefineStyle: Dynamic Convolution Refinement for StyleGAN: In StyleGAN, convolution kernels are shaped by both static parameters shared across images and dynamic modulation factors $w^+\in\mathcal{W}^+$ specific to each image. Therefore, $\mathcal{W}^+$ space is often used for image inversion and editing. However, the pre-trained model struggles to synthesize out-of-domain images due to the limited capabilities of $\mathcal{W}^+$ and its resultant kernels, necessitating full fine-tuning or adaptation through a complex hypernetwork. This paper proposes an efficient refining strategy for dynamic kernels. The key idea is to modify kernels by low-rank residuals, learned from the input image or from domain guidance. These residuals are generated by matrix multiplication between two equally sized sets of tokens, whose number controls the complexity. We validate the refining scheme in image inversion and domain adaptation. In the former task, we design grouped transformer blocks to learn these token sets by one- or two-stage training. In the latter task, token sets are directly optimized to support synthesis in the target domain while preserving original content. Extensive experiments show that our method achieves low distortions for image inversion and high quality for out-of-domain editing.<|reference_end|>
arxiv
@article{xia2024refinestyle:, title={RefineStyle: Dynamic Convolution Refinement for StyleGAN}, author={Siwei Xia and Xueqi Hu and Li Sun and Qingli Li}, journal={arXiv preprint arXiv:2410.06104}, year={2024}, archivePrefix={arXiv}, eprint={2410.06104}, primaryClass={cs.CV} }
xia2024refinestyle:
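The low-rank residual idea in the entry above reduces to a small linear-algebra fact worth making explicit: the product of a d×r and an r×d matrix has rank at most r, so r bounds the complexity of the kernel update. The sketch below is a toy on plain lists, not the paper's implementation.

```python
# Toy sketch of low-rank kernel refinement: a d x d kernel W is adjusted by
# the residual A @ B, where A is d x r and B is r x d. The residual's rank is
# at most r, so the token count r controls the update's complexity.
def low_rank_refine(W, A, B):
    d, r = len(W), len(A[0])
    return [[W[i][j] + sum(A[i][k] * B[k][j] for k in range(r))
             for j in range(d)] for i in range(d)]
```

Learning only A and B (a few small "token" matrices) is far cheaper than fine-tuning the full kernel, which is the efficiency argument the abstract makes against full fine-tuning or hypernetworks.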
arxiv-667114
2410.06106
Distributed Tomographic Reconstruction with Quantization
<|reference_start|>Distributed Tomographic Reconstruction with Quantization: Conventional tomographic reconstruction typically depends on centralized servers for both data storage and computation, leading to concerns about memory limitations and data privacy. Distributed reconstruction algorithms mitigate these issues by partitioning data across multiple nodes, reducing server load and enhancing privacy. However, these algorithms often encounter challenges related to memory constraints and communication overhead between nodes. In this paper, we introduce a decentralized Alternating Directions Method of Multipliers (ADMM) with configurable quantization. By distributing local objectives across nodes, our approach is highly scalable and can efficiently reconstruct images while adapting to available resources. To overcome communication bottlenecks, we propose two quantization techniques based on K-means clustering and JPEG compression. Numerical experiments with benchmark images illustrate the tradeoffs between communication efficiency, memory use, and reconstruction accuracy.<|reference_end|>
arxiv
@article{miao2024distributed, title={Distributed Tomographic Reconstruction with Quantization}, author={Runxuan Miao and Selin Aslan and Erdem Koyuncu and Do{\u{g}}a G{\"u}rsoy}, journal={arXiv preprint arXiv:2410.06106}, year={2024}, archivePrefix={arXiv}, eprint={2410.06106}, primaryClass={cs.DC} }
miao2024distributed
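The K-means quantization idea in the entry above can be sketched on scalar messages: cluster the values a node would transmit into K centroids, then send only centroid indices plus the small codebook. This is an assumption-level toy, not the paper's implementation.

```python
# Toy 1-D K-means quantizer for inter-node messages: each value is replaced
# by the index of its nearest centroid, so a float array compresses to small
# integer codes plus a k-entry codebook.
def kmeans_quantize(values, k=2, iters=20):
    # Initialise centroids spread evenly across the value range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)] if k > 1 else [lo]
    for _ in range(iters):
        # Assignment step: nearest centroid for each value.
        assign = [min(range(k), key=lambda j: abs(v - centroids[j])) for v in values]
        # Update step: move each centroid to the mean of its assigned values.
        for j in range(k):
            members = [v for v, a in zip(values, assign) if a == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return assign, centroids
```

The choice of k is the communication/accuracy knob: fewer centroids mean fewer bits per transmitted value but a coarser reconstruction, which is the trade-off the experiments quantify.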
arxiv-667115
2410.06107
Towards AI-Native Software Engineering (SE 3.0): A Vision and a Challenge Roadmap
<|reference_start|>Towards AI-Native Software Engineering (SE 3.0): A Vision and a Challenge Roadmap: The rise of AI-assisted software engineering (SE 2.0), powered by Foundation Models (FMs) and FM-powered copilots, has shown promise in improving developer productivity. However, it has also exposed inherent limitations, such as cognitive overload on developers and inefficiencies. We propose a shift towards Software Engineering 3.0 (SE 3.0), an AI-native approach characterized by intent-first, conversation-oriented development between human developers and AI teammates. SE 3.0 envisions AI systems evolving beyond task-driven copilots into intelligent collaborators, capable of deeply understanding and reasoning about software engineering principles and intents. We outline the key components of the SE 3.0 technology stack, which includes Teammate.next for adaptive and personalized AI partnership, IDE.next for intent-first conversation-oriented development, Compiler.next for multi-objective code synthesis, and Runtime.next for SLA-aware execution with edge-computing support. Our vision addresses the inefficiencies and cognitive strain of SE 2.0 by fostering a symbiotic relationship between human developers and AI, maximizing their complementary strengths. We also present a roadmap of challenges that must be overcome to realize our vision of SE 3.0. This paper lays the foundation for future discussions on the role of AI in the next era of software engineering.<|reference_end|>
arxiv
@article{hassan2024towards, title={Towards AI-Native Software Engineering (SE 3.0): A Vision and a Challenge Roadmap}, author={Ahmed E. Hassan and Gustavo A. Oliva and Dayi Lin and Boyuan Chen and Zhen Ming (Jack) Jiang}, journal={arXiv preprint arXiv:2410.06107}, year={2024}, archivePrefix={arXiv}, eprint={2410.06107}, primaryClass={cs.SE cs.AI} }
hassan2024towards
arxiv-667116
2410.06108
ConceptAgent: LLM-Driven Precondition Grounding and Tree Search for Robust Task Planning and Execution
<|reference_start|>ConceptAgent: LLM-Driven Precondition Grounding and Tree Search for Robust Task Planning and Execution: Robotic planning and execution in open-world environments is a complex problem due to the vast state spaces and high variability of task embodiment. Recent advances in perception algorithms, combined with Large Language Models (LLMs) for planning, offer promising solutions to these challenges, as the common sense reasoning capabilities of LLMs provide a strong heuristic for efficiently searching the action space. However, prior work fails to address the possibility of hallucinations from LLMs, which results in failures to execute the planned actions largely due to logical fallacies at high- or low-levels. To contend with automation failure due to such hallucinations, we introduce ConceptAgent, a natural language-driven robotic platform designed for task execution in unstructured environments. With a focus on scalability and reliability of LLM-based planning in complex state and action spaces, we present innovations designed to limit these shortcomings, including 1) Predicate Grounding to prevent and recover from infeasible actions, and 2) an embodied version of LLM-guided Monte Carlo Tree Search with self reflection. In simulation experiments, ConceptAgent achieved a 19% task completion rate across three room layouts and 30 easy level embodied tasks outperforming other state-of-the-art LLM-driven reasoning baselines that scored 10.26% and 8.11% on the same benchmark. Additionally, ablation studies on moderate to hard embodied tasks revealed a 20% increase in task completion from the baseline agent to the fully enhanced ConceptAgent, highlighting the individual and combined contributions of Predicate Grounding and LLM-guided Tree Search to enable more robust automation in complex state and action spaces.<|reference_end|>
arxiv
@article{rivera2024conceptagent:, title={ConceptAgent: LLM-Driven Precondition Grounding and Tree Search for Robust Task Planning and Execution}, author={Corban Rivera and Grayson Byrd and William Paul and Tyler Feldman and Meghan Booker and Emma Holmes and David Handelman and Bethany Kemp and Andrew Badger and Aurora Schmidt and Krishna Murthy Jatavallabhula and Celso M de Melo and Lalithkumar Seenivasan and Mathias Unberath and Rama Chellappa}, journal={arXiv preprint arXiv:2410.06108}, year={2024}, archivePrefix={arXiv}, eprint={2410.06108}, primaryClass={cs.AI} }
rivera2024conceptagent:
arxiv-667117
2410.06109
Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition
<|reference_start|>Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition: Long-tailed semi-supervised learning poses a significant challenge in training models with limited labeled data exhibiting a long-tailed label distribution. Current state-of-the-art LTSSL approaches heavily rely on high-quality pseudo-labels for large-scale unlabeled data. However, these methods often neglect the impact of representations learned by the neural network and struggle with real-world unlabeled data, which typically follows a different distribution than labeled data. This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning. Our framework derives the class-balanced contrastive loss through Gaussian kernel density estimation. We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels. By progressively estimating the underlying label distribution and optimizing its alignment with model predictions, we tackle the diverse distribution of unlabeled data in real-world scenarios. Extensive experiments across multiple datasets with varying unlabeled data distributions demonstrate that CCL consistently outperforms prior state-of-the-art methods, achieving over 4% improvement on the ImageNet-127 dataset. Our source code is available at https://github.com/zhouzihao11/CCL<|reference_end|>
arxiv
@article{zhou2024continuous, title={Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition}, author={Zi-Hao Zhou and Siyuan Fang and Zi-Jing Zhou and Tong Wei and Yuanyu Wan and Min-Ling Zhang}, journal={arXiv preprint arXiv:2410.06109}, year={2024}, archivePrefix={arXiv}, eprint={2410.06109}, primaryClass={cs.LG} }
zhou2024continuous
arxiv-667118
2410.06112
SwiftQueue: Optimizing Low-Latency Applications with Swift Packet Queuing
<|reference_start|>SwiftQueue: Optimizing Low-Latency Applications with Swift Packet Queuing: Low Latency, Low Loss, and Scalable Throughput (L4S), as an emerging router-queue management technique, has seen steady deployment in the industry. An L4S-enabled router assigns each packet to the queue based on the packet header marking. Currently, L4S employs per-flow queue selection, i.e. all packets of a flow are marked the same way and thus use the same queues, even though each packet is marked separately. However, this may hurt tail latency and latency-sensitive applications because transient congestion and queue buildups may only affect a fraction of packets in a flow. We present SwiftQueue, a new L4S queue-selection strategy in which a sender uses a novel per-packet latency predictor to pinpoint which packets likely have latency spikes or drops. The insight is that many packet-level latency variations result from complex interactions among recent packets at shared router queues. Yet, these intricate packet-level latency patterns are hard to learn efficiently by traditional models. Instead, SwiftQueue uses a custom Transformer, which is well-studied for its expressiveness on sequential patterns, to predict the next packet's latency based on the latencies of recently received ACKs. Based on the predicted latency of each outgoing packet, SwiftQueue's sender dynamically marks the L4S packet header to assign packets to potentially different queues, even within the same flow. Using real network traces, we show that SwiftQueue is 45-65% more accurate in predicting latency and its variations than state-of-art methods. Based on its latency prediction, SwiftQueue reduces the tail latency for L4S-enabled flows by 36-45%, compared with the existing L4S queue-selection method.<|reference_end|>
arxiv
@article{ray2024swiftqueue:, title={SwiftQueue: Optimizing Low-Latency Applications with Swift Packet Queuing}, author={Siddhant Ray, Xi Jiang, Jack Luo, Nick Feamster, Junchen Jiang}, journal={arXiv preprint arXiv:2410.06112}, year={2024}, archivePrefix={arXiv}, eprint={2410.06112}, primaryClass={cs.NI cs.LG} }
ray2024swiftqueue:
arxiv-667119
2410.06113
RealityCraft: An In-Situ CAD+CAM Interface for Novices via Scene-Aware Augmented Reality
<|reference_start|>RealityCraft: An In-Situ CAD+CAM Interface for Novices via Scene-Aware Augmented Reality: Despite the growing accessibility of augmented reality (AR) for visualization, existing computer-aided design systems remain largely confined to traditional screens and are often inaccessible to novice users due to their complexity. We present RealityCraft, an open-sourced AR interface that enables in-situ computer-aided design and manufacturing (CAD+CAM) for novices. Unlike traditional CAD systems confined to computer screens, RealityCraft allows users to design directly within their physical environments, with primitive geometries. RealityCraft recognizes and utilizes physical constraints such as furniture and walls, enhancing user interaction through spatial awareness and depth occlusion. Furthermore, RealityCraft features an integrated AR-based 3D printing workflow, where users can drag and drop designs onto their 3D printer's virtual twin in their immediate space. Through a user study, we demonstrate that RealityCraft enhances engagement and ease of use for novices. By bridging the gap between digital creation and physical output, RealityCraft aims to transform everyday spaces into creative studios.<|reference_end|>
arxiv
@article{arslan2024realitycraft:, title={RealityCraft: An In-Situ CAD+CAM Interface for Novices via Scene-Aware Augmented Reality}, author={O\u{g}uz Arslan, Artun Akdo\u{g}an, Mustafa Doga Dogan}, journal={arXiv preprint arXiv:2410.06113}, year={2024}, archivePrefix={arXiv}, eprint={2410.06113}, primaryClass={cs.HC cs.ET cs.GR} }
arslan2024realitycraft:
arxiv-667120
2410.06114
UnSeGArmaNet: Unsupervised Image Segmentation using Graph Neural Networks with Convolutional ARMA Filters
<|reference_start|>UnSeGArmaNet: Unsupervised Image Segmentation using Graph Neural Networks with Convolutional ARMA Filters: The data-hungry approach of supervised classification drives the interest of the researchers toward unsupervised approaches, especially for problems such as medical image segmentation, where labeled data are difficult to get. Motivated by the recent success of Vision transformers (ViT) in various computer vision tasks, we propose an unsupervised segmentation framework with a pre-trained ViT. Moreover, by harnessing the graph structure inherent within the image, the proposed method achieves a notable performance in segmentation, especially in medical images. We further introduce a modularity-based loss function coupled with an Auto-Regressive Moving Average (ARMA) filter to capture the inherent graph topology within the image. Finally, we observe that employing Scaled Exponential Linear Unit (SELU) and SILU (Swish) activation functions within the proposed Graph Neural Network (GNN) architecture enhances the performance of segmentation. The proposed method provides state-of-the-art performance (even comparable to supervised methods) on benchmark image segmentation datasets such as ECSSD, DUTS, and CUB, as well as challenging medical image segmentation datasets such as KVASIR, CVC-ClinicDB, ISIC-2018. The github repository of the code is available on \url{https://github.com/ksgr5566/UnSeGArmaNet}.<|reference_end|>
arxiv
@article{reddy2024unsegarmanet:, title={UnSeGArmaNet: Unsupervised Image Segmentation using Graph Neural Networks with Convolutional ARMA Filters}, author={Kovvuri Sai Gopal Reddy, Bodduluri Saran, A. Mudit Adityaja, Saurabh J. Shigwan, Nitin Kumar, Snehasis Mukherjee}, journal={arXiv preprint arXiv:2410.06114}, year={2024}, archivePrefix={arXiv}, eprint={2410.06114}, primaryClass={cs.CV} }
reddy2024unsegarmanet:
arxiv-667121
2410.06115
A physics-based perspective for understanding and utilizing spatial resources of wireless channels
<|reference_start|>A physics-based perspective for understanding and utilizing spatial resources of wireless channels: To satisfy the increasing demands for transmission rates of wireless communications, it is necessary to use spatial resources of electromagnetic (EM) waves. In this context, EM information theory (EIT) has become a hot topic by integrating the theoretical framework of deterministic mathematics and stochastic statistics to explore the transmission mechanisms of continuous EM waves. However, the previous studies were primarily focused on frame analysis, with limited exploration of practical applications and a comprehensive understanding of its essential physical characteristics. In this paper, we present a three-dimensional (3-D) line-of-sight channel capacity formula that captures the vector EM physics and accommodates both near- and far-field scenes. Based on the rigorous mathematical equation and the physical mechanism of fast multipole expansion, a channel model is established, and the finite angular spectral bandwidth feature of scattered waves is revealed. To adapt to the feature of the channel, an optimization problem is formulated for determining the mode currents on the transmitter, aiming to obtain the optimal design of the precoder and combiner. We make comprehensive analyses to investigate the relationship among the spatial degree of freedom, noise, and transmitted power, thereby establishing a rigorous upper bound of channel capacity. A series of simulations are conducted to validate the theoretical model and numerical method. This work offers a novel perspective and methodology for understanding and leveraging EIT, and provides a theoretical foundation for the design and optimization of future wireless communications.<|reference_end|>
arxiv
@article{xu2024a, title={A physics-based perspective for understanding and utilizing spatial resources of wireless channels}, author={Hui Xu, Jun Wei Wu, Zhen Jie Qi, Hao Tian Wu, Rui Wen Shao, Qiang Cheng, Jieao Zhu, Linglong Dai, and Tie Jun Cui}, journal={arXiv preprint arXiv:2410.06115}, year={2024}, archivePrefix={arXiv}, eprint={2410.06115}, primaryClass={cs.IT eess.SP math.IT} }
xu2024a
arxiv-667122
2410.06118
Optimizing the Training Schedule of Multilingual NMT using Reinforcement Learning
<|reference_start|>Optimizing the Training Schedule of Multilingual NMT using Reinforcement Learning: Multilingual NMT is a viable solution for translating low-resource languages (LRLs) when data from high-resource languages (HRLs) from the same language family is available. However, the training schedule, i.e. the order of presentation of languages, has an impact on the quality of such systems. Here, in a many-to-one translation setting, we propose to apply two algorithms that use reinforcement learning to optimize the training schedule of NMT: (1) Teacher-Student Curriculum Learning and (2) Deep Q Network. The former uses an exponentially smoothed estimate of the returns of each action based on the loss on monolingual or multilingual development subsets, while the latter estimates rewards using an additional neural network trained from the history of actions selected in different states of the system, together with the rewards received. On a 8-to-1 translation dataset with LRLs and HRLs, our second method improves BLEU and COMET scores with respect to both random selection of monolingual batches and shuffled multilingual batches, by adjusting the number of presentations of LRL vs. HRL batches.<|reference_end|>
arxiv
@article{allemann2024optimizing, title={Optimizing the Training Schedule of Multilingual NMT using Reinforcement Learning}, author={Alexis Allemann, {\`A}lex R. Atrio, Andrei Popescu-Belis}, journal={arXiv preprint arXiv:2410.06118}, year={2024}, archivePrefix={arXiv}, eprint={2410.06118}, primaryClass={cs.CL} }
allemann2024optimizing
arxiv-667123
2410.06119
E3STO: Orbital Inspired SE(3)-Equivariant Molecular Representation for Electron Density Prediction
<|reference_start|>E3STO: Orbital Inspired SE(3)-Equivariant Molecular Representation for Electron Density Prediction: Electron density prediction stands as a cornerstone challenge in molecular systems, pivotal for various applications such as understanding molecular interactions and conducting precise quantum mechanical calculations. However, the scaling of density functional theory (DFT) calculations is prohibitively expensive. Machine learning methods provide an alternative, offering efficiency and accuracy. We introduce a novel SE(3)-equivariant architecture, drawing inspiration from Slater-Type Orbitals (STO), to learn representations of molecular electronic structures. Our approach offers an alternative functional form for learned orbital-like molecular representation. We showcase the effectiveness of our method by achieving SOTA prediction accuracy of molecular electron density with 30-70\% improvement over other work on Molecular Dynamics data.<|reference_end|>
arxiv
@article{mitnikov2024e3sto:, title={E3STO: Orbital Inspired SE(3)-Equivariant Molecular Representation for Electron Density Prediction}, author={Ilan Mitnikov, Joseph Jacobson}, journal={arXiv preprint arXiv:2410.06119}, year={2024}, archivePrefix={arXiv}, eprint={2410.06119}, primaryClass={physics.chem-ph cs.LG q-bio.BM} }
mitnikov2024e3sto:
arxiv-667124
2410.06120
Uncertainty estimation via ensembles of deep learning models and dropout layers for seismic traces
<|reference_start|>Uncertainty estimation via ensembles of deep learning models and dropout layers for seismic traces: Deep learning models have demonstrated remarkable success in various fields, including seismology. However, one major challenge in deep learning is the presence of mislabeled examples. Additionally, accurately estimating model uncertainty is another challenge in machine learning. In this study, we develop Convolutional Neural Networks (CNNs) to classify seismic waveforms based on first-motion polarity. We trained multiple CNN models with different settings. We also constructed ensembles of networks to estimate uncertainty. The results showed that each training setting achieved satisfactory performances, with the ensemble method outperforming individual networks in uncertainty estimation. We observe that the uncertainty estimation ability of the ensembles of networks can be enhanced using dropout layers. In addition, comparisons among different training settings revealed that the use of dropout improved the robustness of networks to mislabeled examples.<|reference_end|>
arxiv
@article{messuti2024uncertainty, title={Uncertainty estimation via ensembles of deep learning models and dropout layers for seismic traces}, author={Giovanni Messuti, Ortensia Amoroso, Ferdinando Napolitano, Mariarosaria Falanga, Paolo Capuano, Silvia Scarpetta}, journal={arXiv preprint arXiv:2410.06120}, year={2024}, archivePrefix={arXiv}, eprint={2410.06120}, primaryClass={cs.LG physics.data-an} }
messuti2024uncertainty
arxiv-667125
2410.06121
Less is More: Making Smaller Language Models Competent Subgraph Retrievers for Multi-hop KGQA
<|reference_start|>Less is More: Making Smaller Language Models Competent Subgraph Retrievers for Multi-hop KGQA: Retrieval-Augmented Generation (RAG) is widely used to inject external non-parametric knowledge into large language models (LLMs). Recent works suggest that Knowledge Graphs (KGs) contain valuable external knowledge for LLMs. Retrieving information from KGs differs from extracting it from document sets. Most existing approaches seek to directly retrieve relevant subgraphs, thereby eliminating the need for extensive SPARQL annotations, traditionally required by semantic parsing methods. In this paper, we model the subgraph retrieval task as a conditional generation task handled by small language models. Specifically, we define a subgraph identifier as a sequence of relations, each represented as a special token stored in the language models. Our base generative subgraph retrieval model, consisting of only 220M parameters, achieves competitive retrieval performance compared to state-of-the-art models relying on 7B parameters, demonstrating that small language models are capable of performing the subgraph retrieval task. Furthermore, our largest 3B model, when plugged with an LLM reader, sets new SOTA end-to-end performance on both the WebQSP and CWQ benchmarks. Our model and data will be made available online: https://github.com/hwy9855/GSR.<|reference_end|>
arxiv
@article{huang2024less, title={Less is More: Making Smaller Language Models Competent Subgraph Retrievers for Multi-hop KGQA}, author={Wenyu Huang, Guancheng Zhou, Hongru Wang, Pavlos Vougiouklis, Mirella Lapata, Jeff Z. Pan}, journal={arXiv preprint arXiv:2410.06121}, year={2024}, archivePrefix={arXiv}, eprint={2410.06121}, primaryClass={cs.CL} }
huang2024less
arxiv-667126
2410.06124
Learning AND-OR Templates for Professional Photograph Parsing and Guidance
<|reference_start|>Learning AND-OR Templates for Professional Photograph Parsing and Guidance: Since the development of photography art, many so-called "templates" have been formed, namely visual styles summarized from a series of themed and stylized photography works. In this paper, we propose to analyze and summarize these 'templates' in photography by learning composite templates of photography images. We present a framework for learning a hierarchical reconfigurable image template from photography images to learn and characterize the "templates" used in these photography images. Using this method, we measured the artistic quality of photography on the photos and conducted photography guidance. In addition, we also utilized the "templates" for guidance in several image generation tasks. Experimental results show that the learned templates can well describe the photography techniques and styles, whereas the proposed approach can assess the quality of photography images as a human being does.<|reference_end|>
arxiv
@article{jin2024learning, title={Learning AND-OR Templates for Professional Photograph Parsing and Guidance}, author={Xin Jin, Liaoruxing Zhang, Chenyu Fan, Wenbo Yuan}, journal={arXiv preprint arXiv:2410.06124}, year={2024}, archivePrefix={arXiv}, eprint={2410.06124}, primaryClass={cs.CV} }
jin2024learning
arxiv-667127
2410.06126
$\textit{X}^2$-DFD: A framework for e$X$plainable and e$X$tendable Deepfake Detection
<|reference_start|>$\textitX^2$-DFD: A framework for e$X$plainable and e$X$tendable Deepfake Detection: Detecting deepfakes has become an important task. Most existing detection methods provide only real/fake predictions without offering human-comprehensible explanations. Recent studies leveraging MLLMs for deepfake detection have shown improvements in explainability. However, the performance of pre-trained MLLMs (e.g., LLaVA) remains limited due to a lack of understanding of their capabilities for this task and strategies to enhance them. In this work, we empirically assess the strengths and weaknesses of MLLMs specifically in deepfake detection via forgery features analysis. Building on these assessments, we propose a novel framework called ${X}^2$-DFD, consisting of three core modules. The first module, Model Feature Assessment (MFA), measures the detection capabilities of forgery features intrinsic to MLLMs, and gives a descending ranking of these features. The second module, Strong Feature Strengthening (SFS), enhances the detection and explanation capabilities by fine-tuning the MLLM on a dataset constructed based on the top-ranked features. The third module, Weak Feature Supplementing (WFS), improves the fine-tuned MLLM's capabilities on lower-ranked features by integrating external dedicated deepfake detectors. To verify the effectiveness of this framework, we further present a practical implementation, where an automated forgery features generation, evaluation, and ranking procedure is designed for MFA module; an automated generation procedure of the fine-tuning dataset containing real and fake images with explanations based on top-ranked features is developed for SFS model; an external conventional deepfake detector focusing on blending artifact, which corresponds to a low detection capability in the pre-trained MLLM, is integrated for WFS module. Experiments show that our approach enhances both detection and explanation performance.<|reference_end|>
arxiv
@article{chen2024x2-dfd:, title={$\textit{X}^2$-DFD: A framework for e${X}$plainable and e${X}$tendable Deepfake Detection}, author={Yize Chen, Zhiyuan Yan, Siwei Lyu, Baoyuan Wu}, journal={arXiv preprint arXiv:2410.06126}, year={2024}, archivePrefix={arXiv}, eprint={2410.06126}, primaryClass={cs.CV} }
chen2024x2-dfd:
arxiv-667128
2410.06127
De-VertiFL: A Solution for Decentralized Vertical Federated Learning
<|reference_start|>De-VertiFL: A Solution for Decentralized Vertical Federated Learning: Federated Learning (FL), introduced in 2016, was designed to enhance data privacy in collaborative model training environments. Among the FL paradigm, horizontal FL, where clients share the same set of features but different data samples, has been extensively studied in both centralized and decentralized settings. In contrast, Vertical Federated Learning (VFL), which is crucial in real-world decentralized scenarios where clients possess different, yet sensitive, data about the same entity, remains underexplored. Thus, this work introduces De-VertiFL, a novel solution for training models in a decentralized VFL setting. De-VertiFL contributes by introducing a new network architecture distribution, an innovative knowledge exchange scheme, and a distributed federated training process. Specifically, De-VertiFL enables the sharing of hidden layer outputs among federation clients, allowing participants to benefit from intermediate computations, thereby improving learning efficiency. De-VertiFL has been evaluated using a variety of well-known datasets, including both image and tabular data, across binary and multiclass classification tasks. The results demonstrate that De-VertiFL generally surpasses state-of-the-art methods in F1-score performance, while maintaining a decentralized and privacy-preserving framework.<|reference_end|>
arxiv
@article{celdrán2024de-vertifl:, title={De-VertiFL: A Solution for Decentralized Vertical Federated Learning}, author={Alberto Huertas Celdr{\'a}n, Chao Feng, Sabyasachi Banik, Gerome Bovet, Gregorio Martinez Perez, Burkhard Stiller}, journal={arXiv preprint arXiv:2410.06127}, year={2024}, archivePrefix={arXiv}, eprint={2410.06127}, primaryClass={cs.LG} }
celdrán2024de-vertifl:
arxiv-667129
2410.06128
Zero-Shot Learning of Causal Models
<|reference_start|>Zero-Shot Learning of Causal Models: With the increasing acquisition of datasets over time, we now have access to precise and varied descriptions of the world, capturing all sorts of phenomena. These datasets can be seen as empirical observations of unknown causal generative processes, which can commonly be described by Structural Causal Models (SCMs). Recovering these causal generative processes from observations poses formidable challenges, and often require to learn a specific generative model for each dataset. In this work, we propose to learn a \emph{single} model capable of inferring in a zero-shot manner the causal generative processes of datasets. Rather than learning a specific SCM for each dataset, we enable the Fixed-Point Approach (FiP) proposed in~\cite{scetbon2024fip}, to infer the generative SCMs conditionally on their empirical representations. More specifically, we propose to amortize the learning of a conditional version of FiP to infer generative SCMs from observations and causal structures on synthetically generated datasets. We show that our model is capable of predicting in zero-shot the true generative SCMs, and as a by-product, of (i) generating new dataset samples, and (ii) inferring intervened ones. Our experiments demonstrate that our amortized procedure achieves performances on par with SoTA methods trained specifically for each dataset on both in and out-of-distribution problems. To the best of our knowledge, this is the first time that SCMs are inferred in a zero-shot manner from observations, paving the way for a paradigmatic shift towards the assimilation of causal knowledge across datasets.<|reference_end|>
arxiv
@article{mahajan2024zero-shot, title={Zero-Shot Learning of Causal Models}, author={Divyat Mahajan, Jannes Gladrow, Agrin Hilmkil, Cheng Zhang, Meyer Scetbon}, journal={arXiv preprint arXiv:2410.06128}, year={2024}, archivePrefix={arXiv}, eprint={2410.06128}, primaryClass={cs.LG stat.ML} }
mahajan2024zero-shot
arxiv-667130
2410.06131
Towards Unsupervised Eye-Region Segmentation for Eye Tracking
<|reference_start|>Towards Unsupervised Eye-Region Segmentation for Eye Tracking: Finding the eye and parsing out the parts (e.g. pupil and iris) is a key prerequisite for image-based eye tracking, which has become an indispensable module in today's head-mounted VR/AR devices. However, a typical route for training a segmenter requires tedious handlabeling. In this work, we explore an unsupervised way. First, we utilize priors of human eye and extract signals from the image to establish rough clues indicating the eye-region structure. Upon these sparse and noisy clues, a segmentation network is trained to gradually identify the precise area for each part. To achieve accurate parsing of the eye-region, we first leverage the pretrained foundation model Segment Anything (SAM) in an automatic way to refine the eye indications. Then, the learning process is designed in an end-to-end manner following progressive and prior-aware principle. Experiments show that our unsupervised approach can easily achieve 90% (the pupil and iris) and 85% (the whole eye-region) of the performances under supervised learning.<|reference_end|>
arxiv
@article{deng2024towards, title={Towards Unsupervised Eye-Region Segmentation for Eye Tracking}, author={Jiangfan Deng, Zhuang Jia, Zhaoxue Wang, Xiang Long, Daniel K. Du}, journal={arXiv preprint arXiv:2410.06131}, year={2024}, archivePrefix={arXiv}, eprint={2410.06131}, primaryClass={cs.CV} }
deng2024towards
arxiv-667131
2410.06132
Spread blow-up lemma with an application to perturbed random graphs
<|reference_start|>Spread blow-up lemma with an application to perturbed random graphs: Combining ideas of Pham, Sah, Sawhney, and Simkin on spread perfect matchings in super-regular bipartite graphs with an algorithmic blow-up lemma, we prove a spread version of the blow-up lemma. Intuitively, this means that there exists a probability measure over copies of a desired spanning graph $H$ in a given system of super-regular pairs which does not heavily pin down any subset of vertices. This allows one to complement the use of the blow-up lemma with the recently resolved Kahn-Kalai conjecture. As an application, we prove an approximate version of a conjecture of B\"ottcher, Parczyk, Sgueglia, and Skokan on the threshold for appearance of powers of Hamilton cycles in perturbed random graphs.<|reference_end|>
arxiv
@article{nenadov2024spread, title={Spread blow-up lemma with an application to perturbed random graphs}, author={Rajko Nenadov, Huy Tuan Pham}, journal={arXiv preprint arXiv:2410.06132}, year={2024}, archivePrefix={arXiv}, eprint={2410.06132}, primaryClass={math.CO cs.DM math.PR} }
nenadov2024spread
arxiv-667132
2410.06134
Adaptive Label Smoothing for Out-of-Distribution Detection
<|reference_start|>Adaptive Label Smoothing for Out-of-Distribution Detection: Out-of-distribution (OOD) detection, which aims to distinguish unknown classes from known classes, has received increasing attention recently. A main challenge within is the unavailable of samples from the unknown classes in the training process, and an effective strategy is to improve the performance for known classes. Using beneficial strategies such as data augmentation and longer training is thus a way to improve OOD detection. However, label smoothing, an effective method for classifying known classes, degrades the performance of OOD detection, and this phenomenon is under exploration. In this paper, we first analyze that the limited and predefined learning target in label smoothing results in the smaller maximal probability and logit, which further leads to worse OOD detection performance. To mitigate this issue, we then propose a novel regularization method, called adaptive label smoothing (ALS), and the core is to push the non-true classes to have same probabilities whereas the maximal probability is neither fixed nor limited. Extensive experimental results in six datasets with two backbones suggest that ALS contributes to classifying known samples and discerning unknown samples with clear margins. Our code will be available to the public.<|reference_end|>
arxiv
@article{xu2024adaptive, title={Adaptive Label Smoothing for Out-of-Distribution Detection}, author={Mingle Xu and Jaehwan Lee and Sook Yoon and Dong Sun Park}, journal={arXiv preprint arXiv:2410.06134}, year={2024}, archivePrefix={arXiv}, eprint={2410.06134}, primaryClass={cs.CV} }
xu2024adaptive
arxiv-667133
2410.06139
Flips in Odd Matchings
<|reference_start|>Flips in Odd Matchings: Let $\mathcal{P}$ be a set of $n=2m+1$ points in the plane in general position. We define the graph $GM_\mathcal{P}$ whose vertex set is the set of all plane matchings on $\mathcal{P}$ with exactly $m$ edges. Two vertices in $GM_\mathcal{P}$ are connected if the two corresponding matchings have $m-1$ edges in common. In this work we show that $GM_\mathcal{P}$ is connected and give an upper bound of $O(n^2)$ on its diameter. Moreover, we present a tight bound of $\Theta(n)$ for the diameter of the flip graph of points in convex position.<|reference_end|>
arxiv
@article{aichholzer2024flips, title={Flips in Odd Matchings}, author={Oswin Aichholzer, Anna Br{\"o}tzner, Daniel Perz, Patrick Schnider}, journal={arXiv preprint arXiv:2410.06139}, year={2024}, archivePrefix={arXiv}, eprint={2410.06139}, primaryClass={cs.CG math.CO} }
aichholzer2024flips
arxiv-667134
2410.06140
Estimating the Number of HTTP/3 Responses in QUIC Using Deep Learning
<|reference_start|>Estimating the Number of HTTP/3 Responses in QUIC Using Deep Learning: QUIC, a new and increasingly used transport protocol, enhances TCP by providing better security, performance, and features like stream multiplexing. These features, however, also impose challenges for network middle-boxes that need to monitor and analyze web traffic. This paper proposes a novel solution for estimating the number of HTTP/3 responses in a given QUIC connection by an observer. This estimation reveals server behavior, client-server interactions, and data transmission efficiency, which is crucial for various applications such as designing a load balancing solution and detecting HTTP/3 flood attacks. The proposed scheme transforms QUIC connection traces into a sequence of images and trains machine learning (ML) models to predict the number of responses. Then, by aggregating images of a QUIC connection, an observer can estimate the total number of responses. As the problem is formulated as a discrete regression problem, we introduce a dedicated loss function. The proposed scheme is evaluated on a dataset of over seven million images, generated from $100,000$ traces collected from over $44,000$ websites over a four-month period, from various vantage points. The scheme achieves up to 97\% cumulative accuracy in both known and unknown web server settings and 92\% accuracy in estimating the total number of responses in unseen QUIC traces.<|reference_end|>
arxiv
@article{gahtan2024estimating, title={Estimating the Number of HTTP/3 Responses in QUIC Using Deep Learning}, author={Barak Gahtan, Robert J. Shahla, Reuven Cohen, Alex M. Bronstein}, journal={arXiv preprint arXiv:2410.06140}, year={2024}, archivePrefix={arXiv}, eprint={2410.06140}, primaryClass={cs.LG cs.CV cs.NI} }
gahtan2024estimating
arxiv-667135
2410.06143
blockLAW: Blockchain Technology for Legal Automation and Workflow -- Cyber Ethics and Cybersecurity Platforms
<|reference_start|>blockLAW: Blockchain Technology for Legal Automation and Workflow -- Cyber Ethics and Cybersecurity Platforms: In the current legal environment, it is essential to prioritize the protection and reliability of data to promote trust and effectiveness. This study examines how blockchain technology in the form of blockLAW can be applicable to investigate its effects on legal automation, cybersecurity, and ethical concerns. The decentralized ledger and unchangeable characteristics of Blockchain provide opportunities to simplify legal procedures, automate contract execution with smart contracts, and improve transparency in legal transactions. Blockchain is seen as a crucial instrument for updating legal processes while maintaining ethical standards, tackling issues like scalability, regulatory adherence, and ethical dilemmas such as privacy and fairness. The study examines recent developments and evaluates blockchain impact on legal structures, offering perspectives on its potential to enhance legal procedures and guarantee transparency in legal systems. It further emphasizes blockchain ability to redefine how legal professionals handle and protect sensitive information, leading to stronger, more effective, and reliable legal procedures. We have also discussed the technological considerations when it comes to blockchain integration into legal systems like integration planning, implementation strategies, innovations, advancements, trends with Blockchain Integration Framework for legal systems.<|reference_end|>
arxiv
@article{pokharel2024blocklaw, title={blockLAW: Blockchain Technology for Legal Automation and Workflow -- Cyber Ethics and Cybersecurity Platforms}, author={Bishwo Prakash Pokharel and Naresh Kshetri}, journal={arXiv preprint arXiv:2410.06143}, year={2024}, archivePrefix={arXiv}, eprint={2410.06143}, primaryClass={cs.CR} }
pokharel2024blocklaw
arxiv-667136
2410.06145
Serverless Cold Starts and Where to Find Them
<|reference_start|>Serverless Cold Starts and Where to Find Them: This paper releases and analyzes a month-long trace of 85 billion user requests and 11.9 million cold starts from Huawei's serverless cloud platform. Our analysis spans workloads from five data centers. We focus on cold starts and provide a comprehensive examination of the underlying factors influencing the number and duration of cold starts. These factors include trigger types, request synchronicity, runtime languages, and function resource allocations. We investigate components of cold starts, including pod allocation time, code and dependency deployment time, and scheduling delays, and examine their relationships with runtime languages, trigger types, and resource allocation. We introduce pod utility ratio to measure the pod's useful lifetime relative to its cold start time, giving a more complete picture of cold starts, and see that some pods with long cold start times have longer useful lifetimes. Our findings reveal the complexity and multifaceted origins of the number, duration, and characteristics of cold starts, driven by differences in trigger types, runtime languages, and function resource allocations. For example, cold starts in Region 1 take up to 7 seconds, dominated by dependency deployment time and scheduling. In Region 2, cold starts take up to 3 seconds and are dominated by pod allocation time. Based on this, we identify opportunities to reduce the number and duration of cold starts using strategies for multi-region scheduling. Finally, we suggest directions for future research to address these challenges and enhance the performance of serverless cloud platforms. Our datasets and code are available here https://github.com/sir-lab/data-release<|reference_end|>
arxiv
@article{joosen2024serverless, title={Serverless Cold Starts and Where to Find Them}, author={Artjom Joosen and Ahmed Hassan and Martin Asenov and Rajkarn Singh and Luke Darlow and Jianfeng Wang and Qiwen Deng and Adam Barker}, journal={arXiv preprint arXiv:2410.06145}, year={2024}, archivePrefix={arXiv}, eprint={2410.06145}, primaryClass={cs.DC cs.OS cs.PF} }
joosen2024serverless
arxiv-667137
2410.06149
Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach
<|reference_start|>Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach: Traditional image codecs emphasize signal fidelity and human perception, often at the expense of machine vision tasks. Deep learning methods have demonstrated promising coding performance by utilizing rich semantic embeddings optimized for both human and machine vision. However, these compact embeddings struggle to capture fine details such as contours and textures, resulting in imperfect reconstructions. Furthermore, existing learning-based codecs lack scalability. To address these limitations, this paper introduces a content-adaptive diffusion model for scalable image compression. The proposed method encodes fine textures through a diffusion process, enhancing perceptual quality while preserving essential features for machine vision tasks. The approach employs a Markov palette diffusion model combined with widely used feature extractors and image generators, enabling efficient data compression. By leveraging collaborative texture-semantic feature extraction and pseudo-label generation, the method accurately captures texture information. A content-adaptive Markov palette diffusion model is then applied to represent both low-level textures and high-level semantic content in a scalable manner. This framework offers flexible control over compression ratios by selecting intermediate diffusion states, eliminating the need for retraining deep learning models at different operating points. Extensive experiments demonstrate the effectiveness of the proposed framework in both image reconstruction and downstream machine vision tasks such as object detection, segmentation, and facial landmark detection, achieving superior perceptual quality compared to state-of-the-art methods.<|reference_end|>
arxiv
@article{guo2024toward, title={Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach}, author={Sha Guo and Zhuo Chen and Yang Zhao and Ning Zhang and Xiaotong Li and Lingyu Duan}, journal={in Proceedings of the 31st ACM International Conference on Multimedia, pp. 1431-1442, 2023}, year={2024}, doi={10.1145/3581783.3611851}, archivePrefix={arXiv}, eprint={2410.06149}, primaryClass={cs.CV cs.MM eess.IV} }
guo2024toward
arxiv-667138
2410.06151
Quality Diversity Imitation Learning
<|reference_start|>Quality Diversity Imitation Learning: Imitation learning (IL) has shown great potential in various applications, such as robot control. However, traditional IL methods are usually designed to learn only one specific type of behavior since demonstrations typically correspond to a single expert. In this work, we introduce the first generic framework for Quality Diversity Imitation Learning (QD-IL), which enables the agent to learn a broad range of skills from limited demonstrations. Our framework integrates the principles of quality diversity with adversarial imitation learning (AIL) methods, and can potentially improve any inverse reinforcement learning (IRL) method. Empirically, our framework significantly improves the QD performance of GAIL and VAIL on the challenging continuous control tasks derived from Mujoco environments. Moreover, our method even achieves 2x expert performance in the most challenging Humanoid environment.<|reference_end|>
arxiv
@article{wan2024quality, title={Quality Diversity Imitation Learning}, author={Zhenglin Wan and Xingrui Yu and David Mark Bossens and Yueming Lyu and Qing Guo and Flint Xiaofeng Fan and Ivor Tsang}, journal={arXiv preprint arXiv:2410.06151}, year={2024}, archivePrefix={arXiv}, eprint={2410.06151}, primaryClass={cs.LG cs.AI} }
wan2024quality
arxiv-667139
2410.06153
AgentSquare: Automatic LLM Agent Search in Modular Design Space
<|reference_start|>AgentSquare: Automatic LLM Agent Search in Modular Design Space: Recent advancements in Large Language Models (LLMs) have led to a rapid growth of agentic systems capable of handling a wide range of complex tasks. However, current research largely relies on manual, task-specific design, limiting their adaptability to novel tasks. In this paper, we introduce a new research problem: Modularized LLM Agent Search (MoLAS). We propose a modular design space that abstracts existing LLM agent designs into four fundamental modules with uniform IO interface: Planning, Reasoning, Tool Use, and Memory. Building on this design space, we present a novel LLM agent search framework called AgentSquare, which introduces two core mechanisms, i.e., module evolution and recombination, to efficiently search for optimized LLM agents. To further accelerate the process, we design a performance predictor that uses in-context surrogate models to skip unpromising agent designs. Extensive experiments across six benchmarks, covering the diverse scenarios of web, embodied, tool use and game applications, show that AgentSquare substantially outperforms hand-crafted agents, achieving an average performance gain of 17.2% against best-known human designs. Moreover, AgentSquare can generate interpretable design insights, enabling a deeper understanding of agentic architecture and its impact on task performance. We believe that the modular design space and AgentSquare search framework offer a platform for fully exploiting the potential of prior successful designs and consolidating the collective efforts of research community. Code repo is available at https://github.com/tsinghua-fib-lab/AgentSquare.<|reference_end|>
arxiv
@article{shang2024agentsquare, title={AgentSquare: Automatic LLM Agent Search in Modular Design Space}, author={Yu Shang and Yu Li and Keyu Zhao and Likai Ma and Jiahe Liu and Fengli Xu and Yong Li}, journal={arXiv preprint arXiv:2410.06153}, year={2024}, archivePrefix={arXiv}, eprint={2410.06153}, primaryClass={cs.CL} }
shang2024agentsquare
arxiv-667140
2410.06154
GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models
<|reference_start|>GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models: In this work, we propose a novel method (GLOV) enabling Large Language Models (LLMs) to act as implicit Optimizers for Vision-Language Models (VLMs) to enhance downstream vision tasks. Our GLOV meta-prompts an LLM with the downstream task description, querying it for suitable VLM prompts (e.g., for zero-shot classification with CLIP). These prompts are ranked according to a purity measure obtained through a fitness function. In each respective optimization step, the ranked prompts are fed as in-context examples (with their accuracies) to equip the LLM with the knowledge of the type of text prompts preferred by the downstream VLM. Furthermore, we also explicitly steer the LLM generation process in each optimization step by specifically adding an offset difference vector of the embeddings from the positive and negative solutions found by the LLM, in previous optimization steps, to the intermediate layer of the network for the next generation step. This offset vector steers the LLM generation toward the type of language preferred by the downstream VLM, resulting in enhanced performance on the downstream vision tasks. We comprehensively evaluate our GLOV on 16 diverse datasets using two families of VLMs, i.e., dual-encoder (e.g., CLIP) and encoder-decoder (e.g., LLaVa) models -- showing that the discovered solutions can enhance the recognition performance by up to 15.0% and 57.5% (3.8% and 21.6% on average) for these models.<|reference_end|>
arxiv
@article{mirza2024glov, title={GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models}, author={M. Jehanzeb Mirza and Mengjie Zhao and Zhuoyuan Mao and Sivan Doveh and Wei Lin and Paul Gavrikov and Michael Dorkenwald and Shiqi Yang and Saurav Jha and Hiromi Wakaki and Yuki Mitsufuji and Horst Possegger and Rogerio Feris and Leonid Karlinsky and James Glass}, journal={arXiv preprint arXiv:2410.06154}, year={2024}, archivePrefix={arXiv}, eprint={2410.06154}, primaryClass={cs.CV} }
mirza2024glov
arxiv-667141
2410.06157
Detecting Android Malware by Visualizing App Behaviors from Multiple Complementary Views
<|reference_start|>Detecting Android Malware by Visualizing App Behaviors from Multiple Complementary Views: Deep learning has emerged as a promising technology for achieving Android malware detection. To further unleash its detection potentials, software visualization can be integrated for analyzing the details of app behaviors clearly. However, facing increasingly sophisticated malware, existing visualization-based methods, analyzing from one or randomly-selected few views, can only detect limited attack types. We propose and implement LensDroid, a novel technique that detects Android malware by visualizing app behaviors from multiple complementary views. Our goal is to harness the power of combining deep learning and software visualization to automatically capture and aggregate high-level features that are not inherently linked, thereby revealing hidden maliciousness of Android app behaviors. To thoroughly comprehend the details of apps, we visualize app behaviors from three related but distinct views of behavioral sensitivities, operational contexts and supported environments. We then extract high-order semantics based on the views accordingly. To exploit semantic complementarity of the views, we design a deep neural network based model for fusing the visualized features from local to global based on their contributions to downstream tasks. A comprehensive comparison with five baseline techniques is performed on datasets of more than 51K apps in three real-world typical scenarios, including overall threats, app evolution and zero-day malware. The experimental results show that the overall performance of LensDroid is better than the baseline techniques. We also validate the complementarity of the views and demonstrate that the multi-view fusion in LensDroid enhances Android malware detection.<|reference_end|>
arxiv
@article{meng2024detecting, title={Detecting Android Malware by Visualizing App Behaviors from Multiple Complementary Views}, author={Zhaoyi Meng and Jiale Zhang and Jiaqi Guo and Wansen Wang and Wenchao Huang and Jie Cui and Hong Zhong and Yan Xiong}, journal={arXiv preprint arXiv:2410.06157}, year={2024}, archivePrefix={arXiv}, eprint={2410.06157}, primaryClass={cs.CR cs.SE} }
meng2024detecting
arxiv-667142
2410.06158
GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation
<|reference_start|>GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation: We present GR-2, a state-of-the-art generalist robot agent for versatile and generalizable robot manipulation. GR-2 is first pre-trained on a vast number of Internet videos to capture the dynamics of the world. This large-scale pre-training, involving 38 million video clips and over 50 billion tokens, equips GR-2 with the ability to generalize across a wide range of robotic tasks and environments during subsequent policy learning. Following this, GR-2 is fine-tuned for both video generation and action prediction using robot trajectories. It exhibits impressive multi-task learning capabilities, achieving an average success rate of 97.7% across more than 100 tasks. Moreover, GR-2 demonstrates exceptional generalization to new, previously unseen scenarios, including novel backgrounds, environments, objects, and tasks. Notably, GR-2 scales effectively with model size, underscoring its potential for continued growth and application. Project page: \url{https://gr2-manipulation.github.io}.<|reference_end|>
arxiv
@article{cheang2024gr-2, title={GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation}, author={Chi-Lam Cheang and Guangzeng Chen and Ya Jing and Tao Kong and Hang Li and Yifeng Li and Yuxiao Liu and Hongtao Wu and Jiafeng Xu and Yichu Yang and Hanbo Zhang and Minzhao Zhu}, journal={arXiv preprint arXiv:2410.06158}, year={2024}, archivePrefix={arXiv}, eprint={2410.06158}, primaryClass={cs.RO cs.CV cs.LG} }
cheang2024gr-2
arxiv-667143
2410.06161
Automated quality assessment using appearance-based simulations and hippocampus segmentation on low-field paediatric brain MR images
<|reference_start|>Automated quality assessment using appearance-based simulations and hippocampus segmentation on low-field paediatric brain MR images: Understanding the structural growth of paediatric brains is a key step in the identification of various neuro-developmental disorders. However, our knowledge is limited by many factors, including the lack of automated image analysis tools, especially in Low and Middle Income Countries from the lack of high field MR images available. Low-field systems are being increasingly explored in these countries, and, therefore, there is a need to develop automated image analysis tools for these images. In this work, as a preliminary step, we consider two tasks: 1) automated quality assurance and 2) hippocampal segmentation, where we compare multiple approaches. For the automated quality assurance task a DenseNet combined with appearance-based transformations for synthesising artefacts produced the best performance, with a weighted accuracy of 82.3%. For the segmentation task, registration of an average atlas performed the best, with a final Dice score of 0.61. Our results show that although the images can provide understanding of large scale pathologies and gross scale anatomical development, there still remain barriers for their use for more granular analyses.<|reference_end|>
arxiv
@article{sundaresan2024automated, title={Automated quality assessment using appearance-based simulations and hippocampus segmentation on low-field paediatric brain MR images}, author={Vaanathi Sundaresan and Nicola K Dinsdale}, journal={arXiv preprint arXiv:2410.06161}, year={2024}, archivePrefix={arXiv}, eprint={2410.06161}, primaryClass={eess.IV cs.CV} }
sundaresan2024automated
arxiv-667144
2410.06163
Likelihood-based Differentiable Structure Learning
<|reference_start|>Likelihood-based Differentiable Structure Learning: Existing approaches to differentiable structure learning of directed acyclic graphs (DAGs) rely on strong identifiability assumptions in order to guarantee that global minimizers of the acyclicity-constrained optimization problem identify the true DAG. Moreover, it has been observed empirically that the optimizer may exploit undesirable artifacts in the loss function. We explain and remedy these issues by studying the behavior of differentiable acyclicity-constrained programs under general likelihoods with multiple global minimizers. By carefully regularizing the likelihood, it is possible to identify the sparsest model in the Markov equivalence class, even in the absence of an identifiable parametrization. We first study the Gaussian case in detail, showing how proper regularization of the likelihood defines a score that identifies the sparsest model. Assuming faithfulness, it also recovers the Markov equivalence class. These results are then generalized to general models and likelihoods, where the same claims hold. These theoretical results are validated empirically, showing how this can be done using standard gradient-based optimizers, thus paving the way for differentiable structure learning under general models and losses.<|reference_end|>
arxiv
@article{deng2024likelihood-based, title={Likelihood-based Differentiable Structure Learning}, author={Chang Deng and Kevin Bello and Pradeep Ravikumar and Bryon Aragam}, journal={arXiv preprint arXiv:2410.06163}, year={2024}, archivePrefix={arXiv}, eprint={2410.06163}, primaryClass={stat.ML cs.LG math.ST stat.ME stat.TH} }
deng2024likelihood-based
arxiv-667145
2410.06165
GSLoc: Visual Localization with 3D Gaussian Splatting
<|reference_start|>GSLoc: Visual Localization with 3D Gaussian Splatting: We present GSLoc: a new visual localization method that performs dense camera alignment using 3D Gaussian Splatting as a map representation of the scene. GSLoc backpropagates pose gradients over the rendering pipeline to align the rendered and target images, while it adopts a coarse-to-fine strategy by utilizing blurring kernels to mitigate the non-convexity of the problem and improve the convergence. The results show that our approach succeeds at visual localization in challenging conditions of relatively small overlap between initial and target frames inside textureless environments when state-of-the-art neural sparse methods provide inferior results. Using the byproduct of realistic rendering from the 3DGS map representation, we show how to enhance localization results by mixing a set of observed and virtual reference keyframes when solving the image retrieval problem. We evaluate our method both on synthetic and real-world data, discussing its advantages and application potential.<|reference_end|>
arxiv
@article{botashev2024gsloc, title={GSLoc: Visual Localization with 3D Gaussian Splatting}, author={Kazii Botashev and Vladislav Pyatov and Gonzalo Ferrer and Stamatios Lefkimmiatis}, journal={arXiv preprint arXiv:2410.06165}, year={2024}, archivePrefix={arXiv}, eprint={2410.06165}, primaryClass={cs.RO} }
botashev2024gsloc
arxiv-667146
2410.06166
Temporal Reasoning Transfer from Text to Video
<|reference_start|>Temporal Reasoning Transfer from Text to Video: Video Large Language Models (Video LLMs) have shown promising capabilities in video comprehension, yet they struggle with tracking temporal changes and reasoning about temporal relationships. While previous research attributed this limitation to the ineffective temporal encoding of visual inputs, our diagnostic study reveals that video representations contain sufficient information for even small probing classifiers to achieve perfect accuracy. Surprisingly, we find that the key bottleneck in Video LLMs' temporal reasoning capability stems from the underlying LLM's inherent difficulty with temporal concepts, as evidenced by poor performance on textual temporal question-answering tasks. Building on this discovery, we introduce the Textual Temporal reasoning Transfer (T3). T3 synthesizes diverse temporal reasoning tasks in pure text format from existing image-text datasets, addressing the scarcity of video samples with complex temporal scenarios. Remarkably, without using any video data, T3 enhances LongVA-7B's temporal understanding, yielding a 5.3 absolute accuracy improvement on the challenging TempCompass benchmark, which enables our model to outperform ShareGPT4Video-8B trained on 28,000 video samples. Additionally, the enhanced LongVA-7B model achieves competitive performance on comprehensive video benchmarks. For example, it achieves a 49.7 accuracy on the Temporal Reasoning task of Video-MME, surpassing powerful large-scale models such as InternVL-Chat-V1.5-20B and VILA1.5-40B. Further analysis reveals a strong correlation between textual and video temporal task performance, validating the efficacy of transferring temporal reasoning abilities from text to video domains.<|reference_end|>
arxiv
@article{li2024temporal, title={Temporal Reasoning Transfer from Text to Video}, author={Lei Li and Yuanxin Liu and Linli Yao and Peiyuan Zhang and Chenxin An and Lean Wang and Xu Sun and Lingpeng Kong and Qi Liu}, journal={arXiv preprint arXiv:2410.06166}, year={2024}, archivePrefix={arXiv}, eprint={2410.06166}, primaryClass={cs.CV cs.CL} }
li2024temporal
arxiv-667147
2410.06169
Quadratic Is Not What You Need For Multimodal Large Language Models
<|reference_start|>Quadratic Is Not What You Need For Multimodal Large Language Models: In the past year, the capabilities of Multimodal Large Language Models (MLLMs) have significantly improved across various aspects. However, constrained by the quadratic growth of computation in LLMs as the number of tokens increases, efficiency has become a bottleneck for further scaling MLLMs. Although recent efforts have been made to prune visual tokens or use more lightweight LLMs to reduce computation, the problem of quadratic growth in computation with the increase of visual tokens still persists. To address this, we propose a novel approach: instead of reducing the input visual tokens for LLMs, we focus on pruning vision-related computations within the LLMs. After pruning, the computation growth in the LLM is no longer quadratic with the increase of visual tokens, but linear. Surprisingly, we found that after applying such extensive pruning, the capabilities of MLLMs are comparable with the original one and even superior on some benchmarks with only 25% of the computation. This finding opens up the possibility for MLLMs to incorporate much denser visual tokens. Additionally, based on this finding, we further analyzed some architectural design deficiencies in existing MLLMs and proposed promising improvements. To the best of our knowledge, this is the first study to investigate the computational redundancy in the LLM's vision component of MLLMs. Code and checkpoints will be released soon.<|reference_end|>
arxiv
@article{pham2024quadratic, title={Quadratic Is Not What You Need For Multimodal Large Language Models}, author={Phu Pham and Wentian Zhao and Kun Wan and Yu-Jhe Li and Zeliang Zhang and Daniel Miranda and Ajinkya Kale and Chenliang Xu}, journal={arXiv preprint arXiv:2410.06169}, year={2024}, archivePrefix={arXiv}, eprint={2410.06169}, primaryClass={cs.CV} }
pham2024quadratic
arxiv-667148
2410.06170
QGym: Scalable Simulation and Benchmarking of Queuing Network Controllers
<|reference_start|>QGym: Scalable Simulation and Benchmarking of Queuing Network Controllers: Queuing network control determines the allocation of scarce resources to manage congestion, a fundamental problem in manufacturing, communications, and healthcare. Compared to standard RL problems, queueing problems are distinguished by unique challenges: i) a system operating in continuous time, ii) high stochasticity, and iii) long horizons over which the system can become unstable (exploding delays). To spur methodological progress tackling these challenges, we present an open-sourced queueing simulation framework, QGym, that benchmark queueing policies across realistic problem instances. Our modular framework allows the researchers to build on our initial instances, which provide a wide range of environments including parallel servers, criss-cross, tandem, and re-entrant networks, as well as a realistically calibrated hospital queuing system. QGym makes it easy to compare multiple policies, including both model-free RL methods and classical queuing policies. Our testbed complements the traditional focus on evaluating algorithms based on mathematical guarantees in idealized settings, and significantly expands the scope of empirical benchmarking in prior work. QGym code is open-sourced at https://github.com/namkoong-lab/QGym.<|reference_end|>
arxiv
@article{chen2024qgym, title={QGym: Scalable Simulation and Benchmarking of Queuing Network Controllers}, author={Haozhe Chen and Ang Li and Ethan Che and Tianyi Peng and Jing Dong and Hongseok Namkoong}, journal={arXiv preprint arXiv:2410.06170}, year={2024}, archivePrefix={arXiv}, eprint={2410.06170}, primaryClass={cs.LG cs.SY eess.SY} }
chen2024qgym
arxiv-667149
2410.06171
Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines
<|reference_start|>Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines: Recent work developed convolutional deep kernel machines, achieving 92.7% test accuracy on CIFAR-10 using a ResNet-inspired architecture, which is SOTA for kernel methods. However, this still lags behind neural networks, which easily achieve over 94% test accuracy with similar architectures. In this work we introduce several modifications to improve the convolutional deep kernel machine's generalisation, including stochastic kernel regularisation, which adds noise to the learned Gram matrices during training. The resulting model achieves 94.5% test accuracy on CIFAR-10. This finding has important theoretical and practical implications, as it demonstrates that the ability to perform well on complex tasks like image classification is not unique to neural networks. Instead, other approaches including deep kernel methods can achieve excellent performance on such tasks, as long as they have the capacity to learn representations from data.<|reference_end|>
arxiv
@article{milsom2024stochastic, title={Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines}, author={Edward Milsom and Ben Anson and Laurence Aitchison}, journal={arXiv preprint arXiv:2410.06171}, year={2024}, archivePrefix={arXiv}, eprint={2410.06171}, primaryClass={stat.ML cs.LG} }
milsom2024stochastic
arxiv-667150
2410.06172
Multimodal Situational Safety
<|reference_start|>Multimodal Situational Safety: Multimodal Large Language Models (MLLMs) are rapidly evolving, demonstrating impressive capabilities as multimodal assistants that interact with both humans and their environments. However, this increased sophistication introduces significant safety concerns. In this paper, we present the first evaluation and analysis of a novel safety challenge termed Multimodal Situational Safety, which explores how safety considerations vary based on the specific situation in which the user or agent is engaged. We argue that for an MLLM to respond safely, whether through language or action, it often needs to assess the safety implications of a language query within its corresponding visual context. To evaluate this capability, we develop the Multimodal Situational Safety benchmark (MSSBench) to assess the situational safety performance of current MLLMs. The dataset comprises 1,820 language query-image pairs, half of which the image context is safe, and the other half is unsafe. We also develop an evaluation framework that analyzes key safety aspects, including explicit safety reasoning, visual understanding, and, crucially, situational safety reasoning. Our findings reveal that current MLLMs struggle with this nuanced safety problem in the instruction-following setting and struggle to tackle these situational safety challenges all at once, highlighting a key area for future research. Furthermore, we develop multi-agent pipelines to coordinately solve safety challenges, which shows consistent improvement in safety over the original MLLM response. Code and data: mssbench.github.io.<|reference_end|>
arxiv
@article{zhou2024multimodal, title={Multimodal Situational Safety}, author={Kaiwen Zhou and Chengzhi Liu and Xuandong Zhao and Anderson Compalas and Dawn Song and Xin Eric Wang}, journal={arXiv preprint arXiv:2410.06172}, year={2024}, archivePrefix={arXiv}, eprint={2410.06172}, primaryClass={cs.AI cs.CL} }
zhou2024multimodal
arxiv-667151
2410.06173
Manual Verbalizer Enrichment for Few-Shot Text Classification
<|reference_start|>Manual Verbalizer Enrichment for Few-Shot Text Classification: With the continuous development of pre-trained language models, prompt-based training becomes a well-adopted paradigm that drastically improves the exploitation of models for many natural language processing tasks. Prompting also shows great performance compared to traditional fine-tuning when adapted to zero-shot or few-shot scenarios where the number of annotated data is limited. In this framework, the role of verbalizers is essential, as an interpretation from masked word distributions into output predictions. In this work, we propose \acrshort{mave}, an approach for verbalizer construction by enrichment of class labels using neighborhood relation in the embedding space of words for the text classification task. In addition, we elaborate a benchmarking procedure to evaluate typical baselines of verbalizers for document classification in few-shot learning contexts. Our model achieves state-of-the-art results while using significantly fewer resources. We show that our approach is particularly effective in cases with extremely limited supervision data.<|reference_end|>
arxiv
@article{nguyen2024manual, title={Manual Verbalizer Enrichment for Few-Shot Text Classification}, author={Quang Anh Nguyen and Nadi Tomeh and Mustapha Lebbah and Thierry Charnois and Hanene Azzag and Santiago Cordoba Mu\~{n}oz}, journal={arXiv preprint arXiv:2410.06173}, year={2024}, archivePrefix={arXiv}, eprint={2410.06173}, primaryClass={cs.CL cs.AI cs.LG} }
nguyen2024manual
arxiv-667152
2410.06174
Locally energy-stable finite element schemes for incompressible flow problems: Design and analysis for equal-order interpolations
<|reference_start|>Locally energy-stable finite element schemes for incompressible flow problems: Design and analysis for equal-order interpolations: We show that finite element discretizations of incompressible flow problems can be designed to ensure preservation/dissipation of kinetic energy not only globally but also locally. In the context of equal-order (piecewise-linear) interpolations, we prove the validity of a semi-discrete energy inequality for a quadrature-based approximation to the nonlinear convective term, which we combine with the Becker--Hansbo pressure stabilization. An analogy with entropy-stable algebraic flux correction schemes for the compressible Euler equations and the shallow water equations yields a weak `bounded variation' estimate from which we deduce the semi-discrete Lax--Wendroff consistency and convergence towards dissipative weak solutions. The results of our numerical experiments for standard test problems confirm that the method under investigation is non-oscillatory and exhibits optimal convergence behavior.<|reference_end|>
arxiv
@article{hajduk2024locally, title={Locally energy-stable finite element schemes for incompressible flow problems: Design and analysis for equal-order interpolations}, author={Hennes Hajduk, Dmitri Kuzmin, Gert Lube, Philipp \"{O}ffner}, journal={arXiv preprint arXiv:2410.06174}, year={2024}, archivePrefix={arXiv}, eprint={2410.06174}, primaryClass={math.NA cs.NA} }
hajduk2024locally
arxiv-667153
2410.06176
SC-Bench: A Large-Scale Dataset for Smart Contract Auditing
<|reference_start|>SC-Bench: A Large-Scale Dataset for Smart Contract Auditing: There is a huge demand to ensure the compliance of smart contracts listed on blockchain platforms with safety and economic standards. Today, manual efforts in the form of auditing are commonly used to achieve this goal. ML-based automated techniques have the promise to alleviate human efforts and the resulting monetary costs. However, unlike other domains where ML techniques have had huge successes, no systematic ML techniques have been proposed or applied to smart contract auditing. We present SC-Bench, the first dataset for automated smart-contract auditing research. SC-Bench consists of 5,377 real-world smart contracts running on Ethereum, a widely used blockchain platform, and 15,975 violations of standards on Ethereum called ERCs. Out of these violations, 139 are real violations programmers made. The remaining are errors we systematically injected to reflect the violations of different ERC rules. We evaluate SC-Bench using GPT-4 by prompting it with both the contracts and ERC rules. In addition, we manually identify each violated rule and the corresponding code site (i.e., oracle) and prompt GPT-4 with the information, asking for a True-or-False question. Our results show that without the oracle, GPT-4 can only detect 0.9% of violations, and with the oracle, it detects 22.9% of violations. These results show the potential room for improvement in ML-based techniques for smart-contract auditing.<|reference_end|>
arxiv
@article{xia2024sc-bench:, title={SC-Bench: A Large-Scale Dataset for Smart Contract Auditing}, author={Shihao Xia, Mengting He, Linhai Song, Yiying Zhang}, journal={arXiv preprint arXiv:2410.06176}, year={2024}, archivePrefix={arXiv}, eprint={2410.06176}, primaryClass={cs.CR cs.AI} }
xia2024sc-bench:
arxiv-667154
2410.06180
CBIDR: A novel method for information retrieval combining image and data by means of TOPSIS applied to medical diagnosis
<|reference_start|>CBIDR: A novel method for information retrieval combining image and data by means of TOPSIS applied to medical diagnosis: Content-Based Image Retrieval (CBIR) have shown promising results in the field of medical diagnosis, which aims to provide support to medical professionals (doctor or pathologist). However, the ultimate decision regarding the diagnosis is made by the medical professional, drawing upon their accumulated experience. In this context, we believe that artificial intelligence can play a pivotal role in addressing the challenges in medical diagnosis not by making the final decision but by assisting in the diagnosis process with the most relevant information. The CBIR methods use similarity metrics to compare feature vectors generated from images using Convolutional Neural Networks (CNNs). In addition to the information contained in medical images, clinical data about the patient is often available and is also relevant in the final decision-making process by medical professionals. In this paper, we propose a novel method named CBIDR, which leverage both medical images and clinical data of patient, combining them through the ranking algorithm TOPSIS. The goal is to aid medical professionals in their final diagnosis by retrieving images and clinical data of patient that are most similar to query data from the database. As a case study, we illustrate our CBIDR for diagnostic of oral cancer including histopathological images and clinical data of patient. Experimental results in terms of accuracy achieved 97.44% in Top-1 and 100% in Top-5 showing the effectiveness of the proposed approach.<|reference_end|>
arxiv
@article{giuri2024cbidr:, title={CBIDR: A novel method for information retrieval combining image and data by means of TOPSIS applied to medical diagnosis}, author={Humberto Giuri and Renato A. Krohling}, journal={arXiv preprint arXiv:2410.06180}, year={2024}, archivePrefix={arXiv}, eprint={2410.06180}, primaryClass={cs.IR cs.AI eess.IV} }
giuri2024cbidr:
arxiv-667155
2410.06186
The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD
<|reference_start|>The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD: We propose a simple heuristic privacy analysis of noisy clipped stochastic gradient descent (DP-SGD) in the setting where only the last iterate is released and the intermediate iterates remain hidden. Namely, our heuristic assumes a linear structure for the model. We show experimentally that our heuristic is predictive of the outcome of privacy auditing applied to various training procedures. Thus it can be used prior to training as a rough estimate of the final privacy leakage. We also probe the limitations of our heuristic by providing some artificial counterexamples where it underestimates the privacy leakage. The standard composition-based privacy analysis of DP-SGD effectively assumes that the adversary has access to all intermediate iterates, which is often unrealistic. However, this analysis remains the state of the art in practice. While our heuristic does not replace a rigorous privacy analysis, it illustrates the large gap between the best theoretical upper bounds and the privacy auditing lower bounds and sets a target for further work to improve the theoretical privacy analyses. We also empirically support our heuristic and show existing privacy auditing attacks are bounded by our heuristic analysis in both vision and language tasks.<|reference_end|>
arxiv
@article{steinke2024the, title={The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD}, author={Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Guha Thakurta, Adam Smith, Andreas Terzis}, journal={arXiv preprint arXiv:2410.06186}, year={2024}, archivePrefix={arXiv}, eprint={2410.06186}, primaryClass={cs.CR cs.LG} }
steinke2024the
arxiv-667156
2410.06187
A column generation algorithm with dynamic constraint aggregation for minimum sum-of-squares clustering
<|reference_start|>A column generation algorithm with dynamic constraint aggregation for minimum sum-of-squares clustering: The minimum sum-of-squares clustering problem (MSSC), also known as $k$-means clustering, refers to the problem of partitioning $n$ data points into $k$ clusters, with the objective of minimizing the total sum of squared Euclidean distances between each point and the center of its assigned cluster. We propose an efficient algorithm for solving large-scale MSSC instances, which combines column generation (CG) with dynamic constraint aggregation (DCA) to effectively reduce the number of constraints considered in the CG master problem. DCA was originally conceived to reduce degeneracy in set partitioning problems by utilizing an aggregated restricted master problem obtained from a partition of the set partitioning constraints into disjoint clusters. In this work, we explore the use of DCA within a CG algorithm for MSSC exact solution. Our method is fine-tuned by a series of ablation studies on DCA design choices, and is demonstrated to significantly outperform existing state-of-the-art exact approaches available in the literature.<|reference_end|>
arxiv
@article{sudoso2024a, title={A column generation algorithm with dynamic constraint aggregation for minimum sum-of-squares clustering}, author={Antonio M. Sudoso, Daniel Aloise}, journal={arXiv preprint arXiv:2410.06187}, year={2024}, archivePrefix={arXiv}, eprint={2410.06187}, primaryClass={math.OC cs.LG} }
sudoso2024a
arxiv-667157
2410.06188
Beyond the Alphabet: Deep Signal Embedding for Enhanced DNA Clustering
<|reference_start|>Beyond the Alphabet: Deep Signal Embedding for Enhanced DNA Clustering: The emerging field of DNA storage employs strands of DNA bases (A/T/C/G) as a storage medium for digital information to enable massive density and durability. The DNA storage pipeline includes: (1) encoding the raw data into sequences of DNA bases; (2) synthesizing the sequences as DNA \textit{strands} that are stored over time as an unordered set; (3) sequencing the DNA strands to generate DNA \textit{reads}; and (4) deducing the original data. The DNA synthesis and sequencing stages each generate several independent error-prone duplicates of each strand which are then utilized in the final stage to reconstruct the best estimate for the original strand. Specifically, the reads are first \textit{clustered} into groups likely originating from the same strand (based on their similarity to each other), and then each group approximates the strand that led to the reads of that group. This work improves the DNA clustering stage by embedding it as part of the DNA sequencing. Traditional DNA storage solutions begin after the DNA sequencing process generates discrete DNA reads (A/T/C/G), yet we identify that there is untapped potential in using the raw signals generated by the Nanopore DNA sequencing machine before they are discretized into bases, a process known as \textit{basecalling}, which is done using a deep neural network. We propose a deep neural network that clusters these signals directly, demonstrating superior accuracy, and reduced computation times compared to current approaches that cluster after basecalling.<|reference_end|>
arxiv
@article{abraham2024beyond, title={Beyond the Alphabet: Deep Signal Embedding for Enhanced DNA Clustering}, author={Hadas Abraham, Barak Gahtan, Adir Kobovich, Orian Leitersdorf, Alex M. Bronstein, Eitan Yaakobi}, journal={arXiv preprint arXiv:2410.06188}, year={2024}, archivePrefix={arXiv}, eprint={2410.06188}, primaryClass={q-bio.GN cs.CE cs.LG} }
abraham2024beyond
arxiv-667158
2410.06190
Neural-Bayesian Program Learning for Few-shot Dialogue Intent Parsing
<|reference_start|>Neural-Bayesian Program Learning for Few-shot Dialogue Intent Parsing: With the growing importance of customer service in contemporary business, recognizing the intents behind service dialogues has become essential for the strategic success of enterprises. However, the nature of dialogue data varies significantly across different scenarios, and implementing an intent parser for a specific domain often involves tedious feature engineering and a heavy workload of data labeling. In this paper, we propose a novel Neural-Bayesian Program Learning model named Dialogue-Intent Parser (DI-Parser), which specializes in intent parsing under data-hungry settings and offers promising performance improvements. DI-Parser effectively utilizes data from multiple sources in a "Learning to Learn" manner and harnesses the "wisdom of the crowd" through few-shot learning capabilities on human-annotated datasets. Experimental results demonstrate that DI-Parser outperforms state-of-the-art deep learning models and offers practical advantages for industrial-scale applications.<|reference_end|>
arxiv
@article{hong2024neural-bayesian, title={Neural-Bayesian Program Learning for Few-shot Dialogue Intent Parsing}, author={Mengze Hong, Di Jiang, Yuanfeng Song, Chen Jason Zhang}, journal={arXiv preprint arXiv:2410.06190}, year={2024}, archivePrefix={arXiv}, eprint={2410.06190}, primaryClass={cs.CL cs.LG} }
hong2024neural-bayesian
arxiv-667159
2410.06191
Benign Overfitting for Regression with Trained Two-Layer ReLU Networks
<|reference_start|>Benign Overfitting for Regression with Trained Two-Layer ReLU Networks: We study the least-squares regression problem with a two-layer fully-connected neural network, with ReLU activation function, trained by gradient flow. Our first result is a generalization bound that requires no assumptions on the underlying regression function or the noise other than that they are bounded. We operate in the neural tangent kernel regime, and our generalization result is developed via a decomposition of the excess risk into estimation and approximation errors, viewing gradient flow as an implicit regularizer. This decomposition in the context of neural networks is a novel perspective on gradient descent, and helps us avoid uniform convergence traps. In this work, we also establish that under the same setting, the trained network overfits to the data. Together, these results establish the first benign-overfitting result for finite-width ReLU networks for arbitrary regression functions.<|reference_end|>
arxiv
@article{park2024benign, title={Benign Overfitting for Regression with Trained Two-Layer ReLU Networks}, author={Junhyung Park, Patrick Bloebaum, Shiva Prasad Kasiviswanathan}, journal={arXiv preprint arXiv:2410.06191}, year={2024}, archivePrefix={arXiv}, eprint={2410.06191}, primaryClass={cs.LG cs.AI stat.ML} }
park2024benign
arxiv-667160
2410.06192
Hibikino-Musashi@Home 2024 Team Description Paper
<|reference_start|>Hibikino-Musashi@Home 2024 Team Description Paper: This paper provides an overview of the techniques employed by Hibikino-Musashi@Home, which intends to participate in the domestic standard platform league. The team has developed a dataset generator for training a robot vision system and an open-source development environment running on a Human Support Robot simulator. The large language model powered task planner selects appropriate primitive skills to perform the task requested by users. The team aims to design a home service robot that can assist humans in their homes and continuously attends competitions to evaluate and improve the developed system.<|reference_end|>
arxiv
@article{isomoto2024hibikino-musashi@home, title={Hibikino-Musashi@Home 2024 Team Description Paper}, author={Kosei Isomoto and Akinobu Mizutani and Fumiya Matsuzaki and Hikaru Sato and Ikuya Matsumoto and Kosei Yamao and Takuya Kawabata and Tomoya Shiba and Yuga Yano and Atsuki Yokota and Daiju Kanaoka and Hiromasa Yamaguchi and Kazuya Murai and Kim Minje and Lu Shen and Mayo Suzuka and Moeno Anraku and Naoki Yamaguchi and Satsuki Fujimatsu and Shoshi Tokuno and Tadataka Mizo and Tomoaki Fujino and Yuuki Nakadera and Yuka Shishido and Yusuke Nakaoka and Yuichiro Tanaka and Takashi Morie and Hakaru Tamukoh}, journal={arXiv preprint arXiv:2410.06192}, year={2024}, archivePrefix={arXiv}, eprint={2410.06192}, primaryClass={cs.RO} }
isomoto2024hibikino-musashi@home
arxiv-667161
2410.06194
Prompting DirectSAM for Semantic Contour Extraction in Remote Sensing Images
<|reference_start|>Prompting DirectSAM for Semantic Contour Extraction in Remote Sensing Images: The Direct Segment Anything Model (DirectSAM) excels in class-agnostic contour extraction. In this paper, we explore its use by applying it to optical remote sensing imagery, where semantic contour extraction, such as identifying buildings, road networks, and coastlines, holds significant practical value. These applications are currently handled by training specialized small models separately on small datasets in each domain. We introduce a foundation model derived from DirectSAM, termed DirectSAM-RS, which not only inherits the strong segmentation capability acquired from natural images, but also benefits from a large-scale dataset we created for remote sensing semantic contour extraction. This dataset comprises over 34k image-text-contour triplets, making it at least 30 times larger than any individual dataset. DirectSAM-RS integrates a prompter module: a text encoder and cross-attention layers attached to the DirectSAM architecture, which allows flexible conditioning on target class labels or referring expressions. We evaluate DirectSAM-RS in both zero-shot and fine-tuning settings, and demonstrate that it achieves state-of-the-art performance across several downstream benchmarks.<|reference_end|>
arxiv
@article{miao2024prompting, title={Prompting DirectSAM for Semantic Contour Extraction in Remote Sensing Images}, author={Shiyu Miao, Delong Chen, Fan Liu, Chuanyi Zhang, Yanhui Gu, Shengjie Guo, Jun Zhou}, journal={arXiv preprint arXiv:2410.06194}, year={2024}, archivePrefix={arXiv}, eprint={2410.06194}, primaryClass={cs.CV} }
miao2024prompting
arxiv-667162
2410.06195
Entering Real Social World! Benchmarking the Theory of Mind and Socialization Capabilities of LLMs from a First-person Perspective
<|reference_start|>Entering Real Social World! Benchmarking the Theory of Mind and Socialization Capabilities of LLMs from a First-person Perspective: In the social world, humans possess the capability to infer and reason about others' mental states (such as emotions, beliefs, and intentions), known as the Theory of Mind (ToM). Simultaneously, humans' own mental states evolve in response to social situations, a capability we refer to as socialization. Together, these capabilities form the foundation of human social interaction. In the era of artificial intelligence (AI), especially with the development of large language models (LLMs), we raise an intriguing question: How do LLMs perform in terms of ToM and socialization capabilities? And more broadly, can these AI models truly enter and navigate the real social world? Existing research evaluates LLMs' ToM and socialization capabilities by positioning LLMs as passive observers from a third-person perspective, rather than as active participants. However, compared to the third-person perspective, observing and understanding the world from an egocentric first-person perspective is a natural approach for both humans and AI agents. The ToM and socialization capabilities of LLMs from a first-person perspective, a crucial attribute for advancing embodied AI agents, remain unexplored. To answer the aforementioned questions and bridge the research gap, we introduce EgoSocialArena, a novel framework designed to evaluate and investigate the ToM and socialization capabilities of LLMs from a first-person perspective. It encompasses two evaluation environments, static and interactive, with seven scenarios: Daily Life, Counterfactual, New World, Blackjack, Number Guessing, and Limit Texas Hold'em, totaling 2,195 data entries. With EgoSocialArena, we have conducted a comprehensive evaluation of nine advanced LLMs and observed some key insights regarding the future development of LLMs as well as the capability levels of the most advanced LLMs currently available.<|reference_end|>
arxiv
@article{hou2024entering, title={Entering Real Social World! Benchmarking the Theory of Mind and Socialization Capabilities of LLMs from a First-person Perspective}, author={Guiyang Hou, Wenqi Zhang, Yongliang Shen, Zeqi Tan, Sihao Shen, Weiming Lu}, journal={arXiv preprint arXiv:2410.06195}, year={2024}, archivePrefix={arXiv}, eprint={2410.06195}, primaryClass={cs.CL cs.AI} }
hou2024entering
arxiv-667163
2410.06203
Integrating Planning into Single-Turn Long-Form Text Generation
<|reference_start|>Integrating Planning into Single-Turn Long-Form Text Generation: Generating high-quality, in-depth textual documents, such as academic papers, news articles, Wikipedia entries, and books, remains a significant challenge for Large Language Models (LLMs). In this paper, we propose to use planning to generate long form content. To achieve our goal, we generate intermediate steps via an auxiliary task that teaches the LLM to plan, reason and structure before generating the final text. Our main novelty lies in a single auxiliary task that does not require multiple rounds of prompting or planning. To overcome the scarcity of training data for these intermediate steps, we leverage LLMs to generate synthetic intermediate writing data such as outlines, key information and summaries from existing full articles. Our experiments demonstrate on two datasets from different domains, namely the scientific news dataset SciNews and Wikipedia datasets in KILT-Wiki and FreshWiki, that LLMs fine-tuned with the auxiliary task generate higher quality documents. We observed +2.5% improvement in ROUGE-Lsum, and a strong 3.60 overall win/loss ratio via human SxS evaluation, with clear wins in organization, relevance, and verifiability.<|reference_end|>
arxiv
@article{liang2024integrating, title={Integrating Planning into Single-Turn Long-Form Text Generation}, author={Yi Liang, You Wu, Honglei Zhuang, Li Chen, Jiaming Shen, Yiling Jia, Zhen Qin, Sumit Sanghai, Xuanhui Wang, Carl Yang, Michael Bendersky}, journal={arXiv preprint arXiv:2410.06203}, year={2024}, archivePrefix={arXiv}, eprint={2410.06203}, primaryClass={cs.CL cs.AI} }
liang2024integrating
arxiv-667164
2410.06205
Round and Round We Go! What makes Rotary Positional Encodings useful?
<|reference_start|>Round and Round We Go! What makes Rotary Positional Encodings useful?: Positional Encodings (PEs) are a critical component of Transformer-based Large Language Models (LLMs), providing the attention mechanism with important sequence-position information. One of the most popular encodings used today in LLMs is the Rotary Positional Encoding (RoPE), which rotates the queries and keys based on their relative distance. A common belief is that RoPE is useful because it helps to decay token dependency as relative distance increases. In this work, we argue that this is unlikely to be the core reason. We study the internals of a trained Gemma 7B model to understand how RoPE is being used at a mechanical level. We find that Gemma learns to use RoPE to construct robust "positional" attention patterns by exploiting the highest frequencies. We also find that, in general, Gemma greatly prefers to use the lowest frequencies of RoPE, which we suspect are used to carry semantic information. We mathematically prove interesting behaviours of RoPE and conduct experiments to verify our findings, proposing a modification of RoPE that fixes some highlighted issues and improves performance. We believe that this work represents an interesting step in better understanding PEs in LLMs, which we believe holds crucial value for scaling LLMs to large sizes and context lengths.<|reference_end|>
arxiv
@article{barbero2024round, title={Round and Round We Go! What makes Rotary Positional Encodings useful?}, author={Federico Barbero, Alex Vitvitskyi, Christos Perivolaropoulos, Razvan Pascanu, Petar Veli\v{c}kovi\'{c}}, journal={arXiv preprint arXiv:2410.06205}, year={2024}, archivePrefix={arXiv}, eprint={2410.06205}, primaryClass={cs.CL cs.LG} }
barbero2024round
arxiv-667165
2410.06209
LeanAgent: Lifelong Learning for Formal Theorem Proving
<|reference_start|>LeanAgent: Lifelong Learning for Formal Theorem Proving: Large Language Models (LLMs) have been successful in mathematical reasoning tasks such as formal theorem proving when integrated with interactive proof assistants like Lean. Existing approaches involve training or fine-tuning an LLM on a specific dataset to perform well on particular domains, such as undergraduate-level mathematics. These methods struggle with generalizability to advanced mathematics. A fundamental limitation is that these approaches operate on static domains, failing to capture how mathematicians often work across multiple domains and projects simultaneously or cyclically. We present LeanAgent, a novel lifelong learning framework for theorem proving that continuously generalizes to and improves on ever-expanding mathematical knowledge without forgetting previously learned knowledge. LeanAgent introduces several key innovations, including a curriculum learning strategy that optimizes the learning trajectory in terms of mathematical difficulty, a dynamic database for efficient management of evolving mathematical knowledge, and progressive training to balance stability and plasticity. LeanAgent successfully proves 162 theorems previously unproved by humans across 23 diverse Lean repositories, many from advanced mathematics. It performs up to 11$\times$ better than the static LLM baseline, proving challenging theorems in domains like abstract algebra and algebraic topology while showcasing a clear progression of learning from basic concepts to advanced topics. In addition, we analyze LeanAgent's superior performance on key lifelong learning metrics. LeanAgent achieves exceptional scores in stability and backward transfer, where learning new tasks improves performance on previously learned tasks. This emphasizes LeanAgent's continuous generalizability and improvement, explaining its superior theorem proving performance.<|reference_end|>
arxiv
@article{kumarappan2024leanagent:, title={LeanAgent: Lifelong Learning for Formal Theorem Proving}, author={Adarsh Kumarappan, Mo Tiwari, Peiyang Song, Robert Joseph George, Chaowei Xiao, Anima Anandkumar}, journal={arXiv preprint arXiv:2410.06209}, year={2024}, archivePrefix={arXiv}, eprint={2410.06209}, primaryClass={cs.LG cs.AI cs.LO} }
kumarappan2024leanagent:
arxiv-667166
2410.06211
A mechanistically interpretable neural network for regulatory genomics
<|reference_start|>A mechanistically interpretable neural network for regulatory genomics: Deep neural networks excel in mapping genomic DNA sequences to associated readouts (e.g., protein-DNA binding). Beyond prediction, the goal of these networks is to reveal to scientists the underlying motifs (and their syntax) which drive genome regulation. Traditional methods that extract motifs from convolutional filters suffer from the uninterpretable dispersion of information across filters and layers. Other methods which rely on importance scores can be unstable and unreliable. Instead, we designed a novel mechanistically interpretable architecture for regulatory genomics, where motifs and their syntax are directly encoded and readable from the learned weights and activations. We provide theoretical and empirical evidence of our architecture's full expressivity, while still being highly interpretable. Through several experiments, we show that our architecture excels in de novo motif discovery and motif instance calling, is robust to variable sequence contexts, and enables fully interpretable generation of novel functional sequences.<|reference_end|>
arxiv
@article{tseng2024a, title={A mechanistically interpretable neural network for regulatory genomics}, author={Alex M. Tseng, Gokcen Eraslan, Tommaso Biancalani, Gabriele Scalia}, journal={arXiv preprint arXiv:2410.06211}, year={2024}, archivePrefix={arXiv}, eprint={2410.06211}, primaryClass={q-bio.GN cs.LG} }
tseng2024a
arxiv-667167
2410.06212
Solving robust MDPs as a sequence of static RL problems
<|reference_start|>Solving robust MDPs as a sequence of static RL problems: Designing control policies whose performance level is guaranteed to remain above a given threshold in a span of environments is a critical feature for the adoption of reinforcement learning (RL) in real-world applications. The search for such robust policies is a notoriously difficult problem, related to the so-called dynamic model of transition function uncertainty, where the environment dynamics are allowed to change at each time step. But in practical cases, one is rather interested in robustness to a span of static transition models throughout interaction episodes. The static model is known to be harder to solve than the dynamic one, and seminal algorithms, such as robust value iteration, as well as most recent works on deep robust RL, build upon the dynamic model. In this work, we propose to revisit the static model. We suggest an analysis of why solving the static model under some mild hypotheses is a reasonable endeavor, based on an equivalence with the dynamic model, and formalize the general intuition that robust MDPs can be solved by tackling a series of static problems. We introduce a generic meta-algorithm called IWOCS, which incrementally identifies worst-case transition models so as to guide the search for a robust policy. Discussion on IWOCS sheds light on new ways to decouple policy optimization and adversarial transition functions and opens new perspectives for analysis. We derive a deep RL version of IWOCS and demonstrate it is competitive with state-of-the-art algorithms on classical benchmarks.<|reference_end|>
arxiv
@article{zouitine2024solving, title={Solving robust MDPs as a sequence of static RL problems}, author={Adil Zouitine, Matthieu Geist, Emmanuel Rachelson}, journal={arXiv preprint arXiv:2410.06212}, year={2024}, archivePrefix={arXiv}, eprint={2410.06212}, primaryClass={cs.LG} }
zouitine2024solving
arxiv-667168
2410.06213
RL, but don't do anything I wouldn't do
<|reference_start|>RL, but don't do anything I wouldn't do: In reinforcement learning, if the agent's reward differs from the designers' true utility, even only rarely, the state distribution resulting from the agent's policy can be very bad, in theory and in practice. When RL policies would devolve into undesired behavior, a common countermeasure is KL regularization to a trusted policy ("Don't do anything I wouldn't do"). All current cutting-edge language models are RL agents that are KL-regularized to a "base policy" that is purely predictive. Unfortunately, we demonstrate that when this base policy is a Bayesian predictive model of a trusted policy, the KL constraint is no longer reliable for controlling the behavior of an advanced RL agent. We demonstrate this theoretically using algorithmic information theory, and while systems today are too weak to exhibit this theorized failure precisely, we RL-finetune a language model and find evidence that our formal results are plausibly relevant in practice. We also propose a theoretical alternative that avoids this problem by replacing the "Don't do anything I wouldn't do" principle with "Don't do anything I mightn't do".<|reference_end|>
arxiv
@article{cohen2024rl, title={RL, but don't do anything I wouldn't do}, author={Michael K. Cohen, Marcus Hutter, Yoshua Bengio, Stuart Russell}, journal={arXiv preprint arXiv:2410.06213}, year={2024}, archivePrefix={arXiv}, eprint={2410.06213}, primaryClass={cs.LG} }
cohen2024rl
arxiv-667169
2410.06214
Fair-OBNC: Correcting Label Noise for Fairer Datasets
<|reference_start|>Fair-OBNC: Correcting Label Noise for Fairer Datasets: Data used by automated decision-making systems, such as Machine Learning models, often reflects discriminatory behavior that occurred in the past. These biases in the training data are sometimes related to label noise, such as in COMPAS, where more African-American offenders are wrongly labeled as having a higher risk of recidivism when compared to their White counterparts. Models trained on such biased data may perpetuate or even aggravate the biases with respect to sensitive information, such as gender, race, or age. However, while multiple label noise correction approaches are available in the literature, these focus on model performance exclusively. In this work, we propose Fair-OBNC, a label noise correction method with fairness considerations, to produce training datasets with measurable demographic parity. The presented method adapts Ordering-Based Noise Correction, with an adjusted criterion of ordering, based both on the margin of error of an ensemble, and the potential increase in the observed demographic parity of the dataset. We evaluate Fair-OBNC against other different pre-processing techniques, under different scenarios of controlled label noise. Our results show that the proposed method is the overall better alternative within the pool of label correction methods, being capable of attaining better reconstructions of the original labels. Models trained in the corrected data have an increase, on average, of 150% in demographic parity, when compared to models trained in data with noisy labels, across the considered levels of label noise.<|reference_end|>
arxiv
@article{silva2024fair-obnc:, title={Fair-OBNC: Correcting Label Noise for Fairer Datasets}, author={In\^{e}s Oliveira e Silva and S\'{e}rgio Jesus and Hugo Ferreira and Pedro Saleiro and In\^{e}s Sousa and Pedro Bizarro and Carlos Soares}, journal={arXiv preprint arXiv:2410.06214}, year={2024}, archivePrefix={arXiv}, eprint={2410.06214}, primaryClass={cs.LG} }
silva2024fair-obnc:
arxiv-667170
2410.06215
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
<|reference_start|>DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback: The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using LLMs as annotators reduce human effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents - or teachers - is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides student feedback. The agent's goal is to improve student performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. DataEnvGym includes multiple teacher environment instantiations across 3 levels of structure in the state representation and action space. More structured environments are based on inferred skills and offer more interpretability and curriculum control. We support 3 diverse tasks (math, code, and VQA) and test multiple students and teachers. Example agents in our teaching environments can iteratively improve students across tasks and settings. 
Moreover, we show that environments teach different skill levels and test variants of key modules, pointing to future work in improving data generation agents, engines, and feedback mechanisms.<|reference_end|>
arxiv
@article{khan2024dataenvgym:, title={DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback}, author={Zaid Khan and Elias Stengel-Eskin and Jaemin Cho and Mohit Bansal}, journal={arXiv preprint arXiv:2410.06215}, year={2024}, archivePrefix={arXiv}, eprint={2410.06215}, primaryClass={cs.CL cs.AI cs.LG} }
khan2024dataenvgym:
arxiv-667171
2410.06219
Gaussian Variational Schemes on Bounded and Unbounded Domains
<|reference_start|>Gaussian Variational Schemes on Bounded and Unbounded Domains: A machine-learnable variational scheme using Gaussian radial basis functions (GRBFs) is presented and used to approximate linear problems on bounded and unbounded domains. In contrast to standard mesh-free methods, which use GRBFs to discretize strong-form differential equations, this work exploits the relationship between integrals of GRBFs, their derivatives, and polynomial moments to produce exact quadrature formulae which enable weak-form expressions. Combined with trainable GRBF means and covariances, this leads to a flexible, generalized Galerkin variational framework which is applied in the infinite-domain setting where the scheme is conforming, as well as the bounded-domain setting where it is not. Error rates for the proposed GRBF scheme are derived in each case, and examples are presented demonstrating utility of this approach as a surrogate modeling technique.<|reference_end|>
arxiv
@article{actor2024gaussian, title={Gaussian Variational Schemes on Bounded and Unbounded Domains}, author={Jonas A. Actor and Anthony Gruber and Eric C. Cyr and Nathaniel Trask}, journal={arXiv preprint arXiv:2410.06219}, year={2024}, archivePrefix={arXiv}, eprint={2410.06219}, primaryClass={math.NA cs.LG cs.NA} }
actor2024gaussian
arxiv-667172
2410.06221
POLIPHONE: A Dataset for Smartphone Model Identification from Audio Recordings
<|reference_start|>POLIPHONE: A Dataset for Smartphone Model Identification from Audio Recordings: When dealing with multimedia data, source attribution is a key challenge from a forensic perspective. This task aims to determine how a given content was captured, providing valuable insights for various applications, including legal proceedings and integrity investigations. The source attribution problem has been addressed in different domains, from identifying the camera model used to capture specific photographs to detecting the synthetic speech generator or microphone model used to create or record given audio tracks. Recent advancements in this area rely heavily on machine learning and data-driven techniques, which often outperform traditional signal processing-based methods. However, a drawback of these systems is their need for large volumes of training data, which must reflect the latest technological trends to produce accurate and reliable predictions. This presents a significant challenge, as the rapid pace of technological progress makes it difficult to maintain datasets that are up-to-date with real-world conditions. For instance, in the task of smartphone model identification from audio recordings, the available datasets are often outdated or acquired inconsistently, making it difficult to develop solutions that are valid beyond a research environment. In this paper we present POLIPHONE, a dataset for smartphone model identification from audio recordings. It includes data from 20 recent smartphones recorded in a controlled environment to ensure reproducibility and scalability for future research. The released tracks contain audio data from various domains (i.e., speech, music, environmental sounds), making the corpus versatile and applicable to a wide range of use cases. We also present numerous experiments to benchmark the proposed dataset using a state-of-the-art classifier for smartphone model identification from audio recordings.<|reference_end|>
arxiv
@article{salvi2024poliphone:, title={POLIPHONE: A Dataset for Smartphone Model Identification from Audio Recordings}, author={Davide Salvi and Daniele Ugo Leonzio and Antonio Giganti and Claudio Eutizi and Sara Mandelli and Paolo Bestagini and Stefano Tubaro}, journal={arXiv preprint arXiv:2410.06221}, year={2024}, archivePrefix={arXiv}, eprint={2410.06221}, primaryClass={cs.SD cs.MM eess.AS} }
salvi2024poliphone:
arxiv-667173
2410.06224
The Fast M\"obius Transform: An algebraic approach to information decomposition
<|reference_start|>The Fast M\"obius Transform: An algebraic approach to information decomposition: The partial information decomposition (PID) and its extension integrated information decomposition ($\Phi$ID) are promising frameworks to investigate information phenomena involving multiple variables. An important limitation of these approaches is the high computational cost involved in their calculation. Here we leverage fundamental algebraic properties of these decompositions to enable a computationally-efficient method to estimate them, which we call the fast M\"obius transform. Our approach is based on a novel formula for estimating the M\"obius function that circumvents important computational bottlenecks. We showcase the capabilities of this approach by presenting two analyses that would be unfeasible without this method: decomposing the information that neural activity at different frequency bands yield about the brain's macroscopic functional organisation, and identifying distinctive dynamical properties of the interactions between multiple voices in baroque music. Overall, our proposed approach illuminates the value of algebraic facets of information decomposition and opens the way to a wide range of future analyses.<|reference_end|>
arxiv
@article{jansma2024the, title={The Fast M\"obius Transform: An algebraic approach to information decomposition}, author={Abel Jansma and Pedro A. M. Mediano and Fernando E. Rosas}, journal={arXiv preprint arXiv:2410.06224}, year={2024}, archivePrefix={arXiv}, eprint={2410.06224}, primaryClass={cs.IT math.IT} }
jansma2024the
arxiv-667174
2410.06225
A Timeline and Analysis for Representation Plasticity in Large Language Models
<|reference_start|>A Timeline and Analysis for Representation Plasticity in Large Language Models: The ability to steer AI behavior is crucial to preventing its long term dangerous and catastrophic potential. Representation Engineering (RepE) has emerged as a novel, powerful method to steer internal model behaviors, such as "honesty", at a top-down level. Understanding the steering of representations should thus be placed at the forefront of alignment initiatives. Unfortunately, current efforts to understand plasticity at this level are highly neglected. This paper aims to bridge the knowledge gap and understand how LLM representation stability, specifically for the concept of "honesty", and model plasticity evolve by applying steering vectors extracted at different fine-tuning stages, revealing differing magnitudes of shifts in model behavior. The findings are pivotal, showing that while early steering exhibits high plasticity, later stages have a surprisingly responsive critical window. This pattern is observed across different model architectures, signaling that there is a general pattern of model plasticity that can be used for effective intervention. These insights greatly contribute to the field of AI transparency, addressing a pressing lack of efficiency limiting our ability to effectively steer model behavior.<|reference_end|>
arxiv
@article{kannan2024a, title={A Timeline and Analysis for Representation Plasticity in Large Language Models}, author={Akshat Kannan}, journal={arXiv preprint arXiv:2410.06225}, year={2024}, archivePrefix={arXiv}, eprint={2410.06225}, primaryClass={cs.LG cs.AI} }
kannan2024a
arxiv-667175
2410.06231
RelitLRM: Generative Relightable Radiance for Large Reconstruction Models
<|reference_start|>RelitLRM: Generative Relightable Radiance for Large Reconstruction Models: We propose RelitLRM, a Large Reconstruction Model (LRM) for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse (4-8) posed images captured under unknown static lighting. Unlike prior inverse rendering methods requiring dense captures and slow optimization, often causing artifacts like incorrect highlights or shadow baking, RelitLRM adopts a feed-forward transformer-based model with a novel combination of a geometry reconstructor and a relightable appearance generator based on diffusion. The model is trained end-to-end on synthetic multi-view renderings of objects under varying known illuminations. This architecture design enables to effectively decompose geometry and appearance, resolve the ambiguity between material and lighting, and capture the multi-modal distribution of shadows and specularity in the relit appearance. We show our sparse-view feed-forward RelitLRM offers competitive relighting results to state-of-the-art dense-view optimization-based baselines while being significantly faster. Our project page is available at: https://relit-lrm.github.io/.<|reference_end|>
arxiv
@article{zhang2024relitlrm:, title={RelitLRM: Generative Relightable Radiance for Large Reconstruction Models}, author={Tianyuan Zhang and Zhengfei Kuang and Haian Jin and Zexiang Xu and Sai Bi and Hao Tan and He Zhang and Yiwei Hu and Milos Hasan and William T. Freeman and Kai Zhang and Fujun Luan}, journal={arXiv preprint arXiv:2410.06231}, year={2024}, archivePrefix={arXiv}, eprint={2410.06231}, primaryClass={cs.CV cs.GR cs.LG} }
zhang2024relitlrm:
arxiv-667176
2410.06232
Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations
<|reference_start|>Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations: Why do biological and artificial neurons sometimes modularise, each encoding a single meaningful variable, and sometimes entangle their representation of many variables? In this work, we develop a theory of when biologically inspired representations -- those that are nonnegative and energy efficient -- modularise with respect to source variables (sources). We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise. Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work. Rather, we show that sources modularise if their support is "sufficiently spread". From this theory, we extract and validate predictions in a variety of empirical studies on how data distribution affects modularisation in nonlinear feedforward and recurrent neural networks trained on supervised and unsupervised tasks. Furthermore, we apply these ideas to neuroscience data. First, we explain why two studies that recorded prefrontal activity in working memory tasks conflict on whether memories are encoded in orthogonal subspaces: the support of the sources differed due to a critical discrepancy in experimental protocol. Second, we use similar arguments to understand why preparatory and potent subspaces in RNN models of motor cortex are only sometimes orthogonal. Third, we study spatial and reward information mixing in entorhinal recordings, and show our theory matches data better than previous work. And fourth, we suggest a suite of surprising settings in which neurons can be (or appear) mixed selective, without requiring complex nonlinear readouts as in traditional theories. 
In sum, our theory prescribes precise conditions on when neural activities modularise, providing tools for inducing and elucidating modular representations in brains and machines.<|reference_end|>
arxiv
@article{dorrell2024don't, title={Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations}, author={Will Dorrell and Kyle Hsu and Luke Hollingsworth and Jin Hwa Lee and Jiajun Wu and Chelsea Finn and Peter E Latham and Tim EJ Behrens and James CR Whittington}, journal={arXiv preprint arXiv:2410.06232}, year={2024}, archivePrefix={arXiv}, eprint={2410.06232}, primaryClass={q-bio.NC cs.AI cs.LG cs.NE} }
dorrell2024don't
arxiv-667177
2410.06233
A Generalized Metriplectic System via Free Energy and System~Identification via Bilevel Convex Optimization
<|reference_start|>A Generalized Metriplectic System via Free Energy and System~Identification via Bilevel Convex Optimization: This work generalizes the classical metriplectic formalism to model Hamiltonian systems with nonconservative dissipation. Classical metriplectic representations allow for the description of energy conservation and production of entropy via a suitable selection of an entropy function and a bilinear symmetric metric. By relaxing the Casimir invariance requirement of the entropy function, this paper shows that the generalized formalism induces the free energy analogous to thermodynamics. The monotonic change of free energy can serve as a more precise criterion than mechanical energy or entropy alone. This paper provides examples of the generalized metriplectic system in a 2-dimensional Hamiltonian system and $\mathrm{SO}(3)$. This paper also provides a bilevel convex optimization approach for the identification of the metriplectic system given measurements of the system.<|reference_end|>
arxiv
@article{teng2024a, title={A Generalized Metriplectic System via Free Energy and System~Identification via Bilevel Convex Optimization}, author={Sangli Teng and Kaito Iwasaki and William Clark and Xihang Yu and Anthony Bloch and Ram Vasudevan and Maani Ghaffari}, journal={arXiv preprint arXiv:2410.06233}, year={2024}, archivePrefix={arXiv}, eprint={2410.06233}, primaryClass={eess.SY cs.SY} }
teng2024a
arxiv-667178
2410.06234
TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data
<|reference_start|>TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data: Large vision and language assistants have enabled new capabilities for interpreting natural images. These approaches have recently been adapted to earth observation data, but they are only able to handle single image inputs, limiting their use for many real-world tasks. In this work, we develop a new vision and language assistant called TEOChat that can engage in conversations about temporal sequences of earth observation data. To train TEOChat, we curate an instruction-following dataset composed of many single image and temporal tasks including building change and damage assessment, semantic change detection, and temporal scene classification. We show that TEOChat can perform a wide variety of spatial and temporal reasoning tasks, substantially outperforming previous vision and language assistants, and even achieving comparable or better performance than specialist models trained to perform these specific tasks. Furthermore, TEOChat achieves impressive zero-shot performance on a change detection and change question answering dataset, outperforms GPT-4o and Gemini 1.5 Pro on multiple temporal tasks, and exhibits stronger single image capabilities than a comparable single EO image instruction-following model. We publicly release our data, models, and code at https://github.com/ermongroup/TEOChat .<|reference_end|>
arxiv
@article{irvin2024teochat:, title={TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data}, author={Jeremy Andrew Irvin and Emily Ruoyu Liu and Joyce Chuyi Chen and Ines Dormoy and Jinyoung Kim and Samar Khanna and Zhuo Zheng and Stefano Ermon}, journal={arXiv preprint arXiv:2410.06234}, year={2024}, archivePrefix={arXiv}, eprint={2410.06234}, primaryClass={cs.CV cs.AI cs.LG} }
irvin2024teochat:
arxiv-667179
2410.06235
Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning
<|reference_start|>Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning: As artificial intelligence (AI) systems advance, we move towards broad AI: systems capable of performing well on diverse tasks, understanding context, and adapting rapidly to new scenarios. A central challenge for broad AI systems is to generalize over tasks in related domains and being robust to distribution shifts. Neuro-symbolic (NeSy) AI bridges the gap between symbolic and sub-symbolic paradigms to address these challenges, enabling adaptable, generalizable, and more interpretable systems. The development of broad AI requires advancements in domain adaptation (DA), enabling models trained on source domains to effectively generalize to unseen target domains. Traditional approaches often rely on parameter optimization and fine-tuning, which can be impractical due to high costs and risks of catastrophic forgetting. NeSy AI systems use multiple models and methods to generalize to unseen domains and maintain performance across varying conditions. We analyze common DA and NeSy approaches with a focus on deep domain-invariant learning, extending to real-world challenges such as adapting to continuously changing domains and handling large domain gaps. We showcase state-of-the-art model-selection methods for scenarios with limited samples and introduce domain-specific adaptations without gradient-based updates for cases where model tuning is infeasible. This work establishes a framework for scalable and generalizable broad AI systems applicable across various problem settings, demonstrating how symbolic reasoning and large language models can build universal computational graphs that generalize across domains and problems, contributing to more adaptable AI approaches for real-world applications.<|reference_end|>
arxiv
@article{dinu2024parameter, title={Parameter Choice and Neuro-Symbolic Approaches for Deep Domain-Invariant Learning}, author={Marius-Constantin Dinu}, journal={arXiv preprint arXiv:2410.06235}, year={2024}, archivePrefix={arXiv}, eprint={2410.06235}, primaryClass={cs.LG} }
dinu2024parameter
arxiv-667180
2410.06236
SD-$\pi$XL: Generating Low-Resolution Quantized Imagery via Score Distillation
<|reference_start|>SD-$\pi$XL: Generating Low-Resolution Quantized Imagery via Score Distillation: Low-resolution quantized imagery, such as pixel art, is seeing a revival in modern applications ranging from video game graphics to digital design and fabrication, where creativity is often bound by a limited palette of elemental units. Despite their growing popularity, the automated generation of quantized images from raw inputs remains a significant challenge, often necessitating intensive manual input. We introduce SD-$\pi$XL, an approach for producing quantized images that employs score distillation sampling in conjunction with a differentiable image generator. Our method enables users to input a prompt and optionally an image for spatial conditioning, set any desired output size $H \times W$, and choose a palette of $n$ colors or elements. Each color corresponds to a distinct class for our generator, which operates on an $H \times W \times n$ tensor. We adopt a softmax approach, computing a convex sum of elements, thus rendering the process differentiable and amenable to backpropagation. We show that employing Gumbel-softmax reparameterization allows for crisp pixel art effects. Unique to our method is the ability to transform input images into low-resolution, quantized versions while retaining their key semantic features. Our experiments validate SD-$\pi$XL's performance in creating visually pleasing and faithful representations, consistently outperforming the current state-of-the-art. Furthermore, we showcase SD-$\pi$XL's practical utility in fabrication through its applications in interlocking brick mosaic, beading and embroidery design.<|reference_end|>
arxiv
@article{binninger2024sd-$\pi$xl:, title={SD-$\pi$XL: Generating Low-Resolution Quantized Imagery via Score Distillation}, author={Alexandre Binninger and Olga Sorkine-Hornung}, journal={arXiv preprint arXiv:2410.06236}, year={2024}, doi={10.1145/3680528.3687570}, archivePrefix={arXiv}, eprint={2410.06236}, primaryClass={cs.CV cs.GR} }
binninger2024sd-$\pi$xl:
arxiv-667181
2410.06237
BUMBLE: Unifying Reasoning and Acting with Vision-Language Models for Building-wide Mobile Manipulation
<|reference_start|>BUMBLE: Unifying Reasoning and Acting with Vision-Language Models for Building-wide Mobile Manipulation: To operate at a building scale, service robots must perform very long-horizon mobile manipulation tasks by navigating to different rooms, accessing different floors, and interacting with a wide and unseen range of everyday objects. We refer to these tasks as Building-wide Mobile Manipulation. To tackle these inherently long-horizon tasks, we introduce BUMBLE, a unified Vision-Language Model (VLM)-based framework integrating open-world RGBD perception, a wide spectrum of gross-to-fine motor skills, and dual-layered memory. Our extensive evaluation (90+ hours) indicates that BUMBLE outperforms multiple baselines in long-horizon building-wide tasks that require sequencing up to 12 ground truth skills spanning 15 minutes per trial. BUMBLE achieves 47.1% success rate averaged over 70 trials in different buildings, tasks, and scene layouts from different starting rooms and floors. Our user study demonstrates 22% higher satisfaction with our method than state-of-the-art mobile manipulation methods. Finally, we demonstrate the potential of using increasingly-capable foundation models to push performance further. For more information, see https://robin-lab.cs.utexas.edu/BUMBLE/<|reference_end|>
arxiv
@article{shah2024bumble:, title={BUMBLE: Unifying Reasoning and Acting with Vision-Language Models for Building-wide Mobile Manipulation}, author={Rutav Shah and Albert Yu and Yifeng Zhu and Yuke Zhu and Roberto Mart\'{i}n-Mart\'{i}n}, journal={arXiv preprint arXiv:2410.06237}, year={2024}, archivePrefix={arXiv}, eprint={2410.06237}, primaryClass={cs.RO cs.AI} }
shah2024bumble:
arxiv-667182
2410.06238
EVOLvE: Evaluating and Optimizing LLMs For Exploration
<|reference_start|>EVOLvE: Evaluating and Optimizing LLMs For Exploration: Despite their success in many domains, large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty. This is crucial as many real-world applications, ranging from personalized recommendations to healthcare interventions, demand that LLMs not only predict but also actively learn to make optimal decisions through exploration. In this work, we measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications. We develop a comprehensive suite of environments, including both context-free and contextual bandits with varying task difficulties, to benchmark LLMs' performance. Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs: by providing explicit algorithm-guided support during inference; and through algorithm distillation via in-context demonstrations and fine-tuning, using synthetic data generated from these algorithms. Impressively, these techniques allow us to achieve superior exploration performance with smaller models, surpassing larger models on various tasks. We conducted an extensive ablation study to shed light on various factors, such as task difficulty and data representation, that influence the efficiency of LLM exploration. Additionally, we conduct a rigorous analysis of the LLM's exploration efficiency using the concept of regret, linking its ability to explore to the model size and underlying algorithm.<|reference_end|>
arxiv
@article{nie2024evolve:, title={EVOLvE: Evaluating and Optimizing LLMs For Exploration}, author={Allen Nie and Yi Su and Bo Chang and Jonathan N. Lee and Ed H. Chi and Quoc V. Le and Minmin Chen}, journal={arXiv preprint arXiv:2410.06238}, year={2024}, archivePrefix={arXiv}, eprint={2410.06238}, primaryClass={cs.LG cs.AI cs.CL} }
nie2024evolve:
arxiv-667183
2410.06239
OrionNav: Online Planning for Robot Autonomy with Context-Aware LLM and Open-Vocabulary Semantic Scene Graphs
<|reference_start|>OrionNav: Online Planning for Robot Autonomy with Context-Aware LLM and Open-Vocabulary Semantic Scene Graphs: Enabling robots to autonomously navigate unknown, complex, dynamic environments and perform diverse tasks remains a fundamental challenge in developing robust autonomous physical agents. They must effectively perceive their surroundings while leveraging world knowledge for decision-making. While recent approaches utilize vision-language and large language models for scene understanding and planning, they often rely on offline processing, external computing, or restrictive environmental assumptions. We present a novel framework for efficient and scalable real-time, onboard autonomous navigation that integrates multi-level abstraction in both perception and planning in unknown large-scale environments that change over time. Our system fuses data from multiple onboard sensors for localization and mapping and integrates it with open-vocabulary semantics to generate hierarchical scene graphs. An LLM-based planner leverages these graphs to generate high-level task execution strategies, which guide low-level controllers in safely accomplishing goals. Our framework's real-time operation enables continuous updates to scene graphs and plans, allowing swift responses to environmental changes and on-the-fly error correction. This is a key advantage over static or rule-based planning systems. We demonstrate our system's efficacy on a quadruped robot navigating large-scale, dynamic environments, showcasing its adaptability and robustness in diverse scenarios.<|reference_end|>
arxiv
@article{devarakonda2024orionnav:, title={OrionNav: Online Planning for Robot Autonomy with Context-Aware LLM and Open-Vocabulary Semantic Scene Graphs}, author={Venkata Naren Devarakonda and Raktim Gautam Goswami and Ali Umut Kaypak and Naman Patel and Rooholla Khorrambakht and Prashanth Krishnamurthy and Farshad Khorrami}, journal={arXiv preprint arXiv:2410.06239}, year={2024}, archivePrefix={arXiv}, eprint={2410.06239}, primaryClass={cs.RO} }
devarakonda2024orionnav:
arxiv-667184
2410.06240
Using Crank-Nikolson Scheme to Solve the Korteweg-de Vries (KdV) Equation
<|reference_start|>Using Crank-Nikolson Scheme to Solve the Korteweg-de Vries (KdV) Equation: The Korteweg-de Vries (KdV) equation is a fundamental partial differential equation that models wave propagation in shallow water and other dispersive media. Accurately solving the KdV equation is essential for understanding wave dynamics in physics and engineering applications. This project focuses on implementing the Crank-Nicolson scheme, a finite difference method known for its stability and accuracy, to solve the KdV equation. The Crank-Nicolson scheme's implicit nature allows for a more stable numerical solution, especially in handling the dispersive and nonlinear terms of the KdV equation. We investigate the performance of the scheme through various test cases, analyzing its convergence and error behavior. The results demonstrate that the Crank-Nicolson method provides a robust approach for solving the KdV equation, with improved accuracy over traditional explicit methods. Code is available at the end of the paper.<|reference_end|>
arxiv
@article{wu2024using, title={Using Crank-Nikolson Scheme to Solve the Korteweg-de Vries (KdV) Equation}, author={Qiming Wu}, journal={arXiv preprint arXiv:2410.06240}, year={2024}, archivePrefix={arXiv}, eprint={2410.06240}, primaryClass={math.NA cs.AI cs.NA} }
wu2024using
arxiv-667185
2410.06241
BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way
<|reference_start|>BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way: The text-to-video (T2V) generation models, offering convenient visual creation, have recently garnered increasing attention. Despite their substantial potential, the generated videos may present artifacts, including structural implausibility, temporal inconsistency, and a lack of motion, often resulting in near-static video. In this work, we have identified a correlation between the disparity of temporal attention maps across different blocks and the occurrence of temporal inconsistencies. Additionally, we have observed that the energy contained within the temporal attention maps is directly related to the magnitude of motion amplitude in the generated videos. Based on these observations, we present BroadWay, a training-free method to improve the quality of text-to-video generation without introducing additional parameters, augmenting memory or sampling time. Specifically, BroadWay is composed of two principal components: 1) Temporal Self-Guidance improves the structural plausibility and temporal consistency of generated videos by reducing the disparity between the temporal attention maps across various decoder blocks. 2) Fourier-based Motion Enhancement enhances the magnitude and richness of motion by amplifying the energy of the map. Extensive experiments demonstrate that BroadWay significantly improves the quality of text-to-video generation with negligible additional cost.<|reference_end|>
arxiv
@article{bu2024broadway:, title={BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way}, author={Jiazi Bu and Pengyang Ling and Pan Zhang and Tong Wu and Xiaoyi Dong and Yuhang Zang and Yuhang Cao and Dahua Lin and Jiaqi Wang}, journal={arXiv preprint arXiv:2410.06241}, year={2024}, archivePrefix={arXiv}, eprint={2410.06241}, primaryClass={cs.CV} }
bu2024broadway:
arxiv-667186
2410.06243
Unsupervised Model Diagnosis
<|reference_start|>Unsupervised Model Diagnosis: Ensuring model explainability and robustness is essential for reliable deployment of deep vision systems. Current methods for evaluating robustness rely on collecting and annotating extensive test sets. While this is common practice, the process is labor-intensive and expensive with no guarantee of sufficient coverage across attributes of interest. Recently, model diagnosis frameworks have emerged leveraging user inputs (e.g., text) to assess the vulnerability of the model. However, such dependence on humans can introduce bias and limitations given the domain knowledge of particular users. This paper proposes Unsupervised Model Diagnosis (UMO), which leverages generative models to produce semantic counterfactual explanations without any user guidance. Given a differentiable computer vision model (i.e., the target model), UMO optimizes for the most counterfactual directions in a generative latent space. Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources, such as dictionaries or language models. We validate the framework on multiple vision tasks (e.g., classification, segmentation, keypoint detection). Extensive experiments show that our unsupervised discovery of semantic directions can correctly highlight spurious correlations and visualize the failure mode of target models without any human intervention.<|reference_end|>
arxiv
@article{wang2024unsupervised, title={Unsupervised Model Diagnosis}, author={Yinong Oliver Wang, Eileen Li, Jinqi Luo, Zhaoning Wang, Fernando De la Torre}, journal={arXiv preprint arXiv:2410.06243}, year={2024}, archivePrefix={arXiv}, eprint={2410.06243}, primaryClass={cs.CV cs.AI cs.CL cs.LG} }
wang2024unsupervised
arxiv-667187
2410.06244
Story-Adapter: A Training-free Iterative Framework for Long Story Visualization
<|reference_start|>Story-Adapter: A Training-free Iterative Framework for Long Story Visualization: Story visualization, the task of generating coherent images based on a narrative, has seen significant advancements with the emergence of text-to-image models, particularly diffusion models. However, maintaining semantic consistency, generating high-quality fine-grained interactions, and ensuring computational feasibility remain challenging, especially in long story visualization (i.e., up to 100 frames). In this work, we propose a training-free and computationally efficient framework, termed Story-Adapter, to enhance the generative capability of long stories. Specifically, we propose an iterative paradigm to refine each generated image, leveraging both the text prompt and all generated images from the previous iteration. Central to our framework is a training-free global reference cross-attention module, which aggregates all generated images from the previous iteration to preserve semantic consistency across the entire story, while minimizing computational costs with global embeddings. This iterative process progressively optimizes image generation by repeatedly incorporating text constraints, resulting in more precise and fine-grained interactions. Extensive experiments validate the superiority of Story-Adapter in improving both semantic consistency and generative capability for fine-grained interactions, particularly in long story scenarios. The project page and associated code can be accessed via https://jwmao1.github.io/storyadapter .<|reference_end|>
arxiv
@article{mao2024story-adapter, title={Story-Adapter: A Training-free Iterative Framework for Long Story Visualization}, author={Jiawei Mao, Xiaoke Huang, Yunfei Xie, Yuanqi Chang, Mude Hui, Bingjie Xu, Yuyin Zhou}, journal={arXiv preprint arXiv:2410.06244}, year={2024}, archivePrefix={arXiv}, eprint={2410.06244}, primaryClass={cs.CV} }
mao2024story-adapter
arxiv-667188
2410.06245
HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction
<|reference_start|>HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction: Reconstructing 3D scenes from multiple viewpoints is a fundamental task in stereo vision. Recently, advances in generalizable 3D Gaussian Splatting have enabled high-quality novel view synthesis for unseen scenes from sparse input views by feed-forward predicting per-pixel Gaussian parameters without extra optimization. However, existing methods typically generate single-scale 3D Gaussians, which lack representation of both large-scale structure and texture details, resulting in mislocation and artefacts. In this paper, we propose a novel framework, HiSplat, which introduces a hierarchical manner in generalizable 3D Gaussian Splatting to construct hierarchical 3D Gaussians via a coarse-to-fine strategy. Specifically, HiSplat generates large coarse-grained Gaussians to capture large-scale structures, followed by fine-grained Gaussians to enhance delicate texture details. To promote inter-scale interactions, we propose an Error Aware Module for Gaussian compensation and a Modulating Fusion Module for Gaussian repair. Our method achieves joint optimization of hierarchical representations, allowing for novel view synthesis using only two-view reference images. Comprehensive experiments on various datasets demonstrate that HiSplat significantly enhances reconstruction quality and cross-dataset generalization compared to prior single-scale methods. The corresponding ablation study and analysis of different-scale 3D Gaussians reveal the mechanism behind the effectiveness. Project website: https://open3dvlab.github.io/HiSplat/<|reference_end|>
arxiv
@article{tang2024hisplat, title={HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction}, author={Shengji Tang, Weicai Ye, Peng Ye, Weihao Lin, Yang Zhou, Tao Chen, Wanli Ouyang}, journal={arXiv preprint arXiv:2410.06245}, year={2024}, archivePrefix={arXiv}, eprint={2410.06245}, primaryClass={cs.CV} }
tang2024hisplat
arxiv-667189
2410.06262
SymDiff: Equivariant Diffusion via Stochastic Symmetrisation
<|reference_start|>SymDiff: Equivariant Diffusion via Stochastic Symmetrisation: We propose SymDiff, a novel method for constructing equivariant diffusion models using the recently introduced framework of stochastic symmetrisation. SymDiff resembles a learned data augmentation that is deployed at sampling time, and is lightweight, computationally efficient, and easy to implement on top of arbitrary off-the-shelf models. Notably, in contrast to previous work, SymDiff typically does not require any neural network components that are intrinsically equivariant, avoiding the need for complex parameterizations and the use of higher-order geometric features. Instead, our method can leverage highly scalable modern architectures as drop-in replacements for these more constrained alternatives. We show that this additional flexibility yields significant empirical benefit on $\mathrm{E}(3)$-equivariant molecular generation. To the best of our knowledge, this is the first application of symmetrisation to generative modelling, suggesting its potential in this domain more generally.<|reference_end|>
arxiv
@article{zhang2024symdiff, title={SymDiff: Equivariant Diffusion via Stochastic Symmetrisation}, author={Leo Zhang, Kianoosh Ashouritaklimi, Yee Whye Teh, Rob Cornish}, journal={arXiv preprint arXiv:2410.06262}, year={2024}, archivePrefix={arXiv}, eprint={2410.06262}, primaryClass={cs.LG stat.ML} }
zhang2024symdiff
arxiv-667190
2410.06263
BoxMap: Efficient Structural Mapping and Navigation
<|reference_start|>BoxMap: Efficient Structural Mapping and Navigation: While humans can successfully navigate using abstractions, ignoring details that are irrelevant to the task at hand, most existing robotic applications require the maintenance of a detailed environment representation which consumes a significant amount of sensing, computing, and storage. These issues are particularly important in a resource-constrained setting with limited power budget. Deep learning methods can learn from prior experience to abstract knowledge of unknown environments, and use it to execute tasks (e.g., frontier exploration, object search, or scene understanding) more efficiently. We propose BoxMap, a Detection-Transformer-based architecture that takes advantage of the structure of the sensed partial environment to update a topological graph of the environment as a set of semantic entities (e.g. rooms and doors) and their relations (e.g. connectivity). These predictions from low-level measurements can then be leveraged to achieve high-level goals with lower computational costs than methods based on detailed representations. As an example application, we consider a robot equipped with a 2-D laser scanner tasked with exploring a residential building. Our BoxMap representation scales quadratically with the number of rooms (with a small constant), resulting in significant savings over a full geometric map. Moreover, our high-level topological representation results in 30.9% shorter trajectories in the exploration task with respect to a standard method.<|reference_end|>
arxiv
@article{wang2024boxmap, title={BoxMap: Efficient Structural Mapping and Navigation}, author={Zili Wang, Christopher Allum, Sean B. Andersson, Roberto Tron}, journal={arXiv preprint arXiv:2410.06263}, year={2024}, archivePrefix={arXiv}, eprint={2410.06263}, primaryClass={cs.RO} }
wang2024boxmap
arxiv-667191
2410.06264
Think While You Generate: Discrete Diffusion with Planned Denoising
<|reference_start|>Think While You Generate: Discrete Diffusion with Planned Denoising: Discrete diffusion has achieved state-of-the-art performance, outperforming or approaching autoregressive models on standard benchmarks. In this work, we introduce Discrete Diffusion with Planned Denoising (DDPD), a novel framework that separates the generation process into two models: a planner and a denoiser. At inference time, the planner selects which positions to denoise next by identifying the most corrupted positions in need of denoising, including both initially corrupted and those requiring additional refinement. This plan-and-denoise approach enables more efficient reconstruction during generation by iteratively identifying and denoising corruptions in the optimal order. DDPD outperforms traditional denoiser-only mask diffusion methods, achieving superior results on language modeling benchmarks such as text8, OpenWebText, and token-based generation on ImageNet $256 \times 256$. Notably, in language modeling, DDPD significantly reduces the performance gap between diffusion-based and autoregressive methods in terms of generative perplexity. Code is available at https://github.com/liusulin/DDPD.<|reference_end|>
arxiv
@article{liu2024think, title={Think While You Generate: Discrete Diffusion with Planned Denoising}, author={Sulin Liu, Juno Nam, Andrew Campbell, Hannes St\"ark, Yilun Xu, Tommi Jaakkola, Rafael G\'omez-Bombarelli}, journal={arXiv preprint arXiv:2410.06264}, year={2024}, archivePrefix={arXiv}, eprint={2410.06264}, primaryClass={cs.LG cs.AI cs.CL cs.CV stat.ML} }
liu2024think
arxiv-667192
2410.06265
SHADE: Deep Density-based Clustering
<|reference_start|>SHADE: Deep Density-based Clustering: Detecting arbitrarily shaped clusters in high-dimensional noisy data is challenging for current clustering methods. We introduce SHADE (Structure-preserving High-dimensional Analysis with Density-based Exploration), the first deep clustering algorithm that incorporates density-connectivity into its loss function. Similar to existing deep clustering algorithms, SHADE supports high-dimensional and large data sets with the expressive power of a deep autoencoder. In contrast to most existing deep clustering methods that rely on a centroid-based clustering objective, SHADE incorporates a novel loss function that captures density-connectivity. SHADE thereby learns a representation that enhances the separation of density-connected clusters. SHADE detects a stable clustering and noise points fully automatically without any user input. It outperforms existing methods in clustering quality, especially on data that contain non-Gaussian clusters, such as video data. Moreover, the embedded space of SHADE is suitable for visualization and interpretation of the clustering results as the individual shapes of the clusters are preserved.<|reference_end|>
arxiv
@article{beer2024shade, title={SHADE: Deep Density-based Clustering}, author={Anna Beer, Pascal Weber, Lukas Miklautz, Collin Leiber, Walid Durani, Christian B\"ohm, Claudia Plant}, journal={arXiv preprint arXiv:2410.06265}, year={2024}, archivePrefix={arXiv}, eprint={2410.06265}, primaryClass={cs.LG} }
beer2024shade
arxiv-667193
2410.06266
Near Exact Privacy Amplification for Matrix Mechanisms
<|reference_start|>Near Exact Privacy Amplification for Matrix Mechanisms: We study the problem of computing the privacy parameters for DP machine learning when using privacy amplification via random batching and noise correlated across rounds via a correlation matrix $\textbf{C}$ (i.e., the matrix mechanism). Past work on this problem either only applied to banded $\textbf{C}$, or gave loose privacy parameters. In this work, we give a framework for computing near-exact privacy parameters for any lower-triangular, non-negative $\textbf{C}$. Our framework allows us to optimize the correlation matrix $\textbf{C}$ while accounting for amplification, whereas past work could not. Empirically, we show this lets us achieve smaller RMSE on prefix sums than the previous state-of-the-art (SOTA). We also show that we can improve on the SOTA performance on deep learning tasks. Our two main technical tools are (i) using Monte Carlo accounting to bypass composition, which was the main technical challenge for past work, and (ii) a "balls-in-bins" batching scheme that enables easy privacy analysis and is closer to practical random batching than Poisson sampling.<|reference_end|>
arxiv
@article{choquette-choo2024near, title={Near Exact Privacy Amplification for Matrix Mechanisms}, author={Christopher A. Choquette-Choo, Arun Ganesh, Saminul Haque, Thomas Steinke, Abhradeep Thakurta}, journal={arXiv preprint arXiv:2410.06266}, year={2024}, archivePrefix={arXiv}, eprint={2410.06266}, primaryClass={cs.CR} }
choquette-choo2024near
arxiv-667194
2410.06267
Can metacognition predict your success in solving problems? An exploratory case study in programming
<|reference_start|>Can metacognition predict your success in solving problems? An exploratory case study in programming: Metacognition has been recognized as an essential skill for academic success and for performance in solving problems. During learning or problem-solving, metacognitive skills facilitate a range of cognitive and affective processes, leading collectively to improved performance. This study explores the predictive potential of metacognition in the second introductory programming course. A two-dimensional model has been proposed, consisting of metacognitive awareness and metacognitive behavior. To evaluate the predictive capacity of metacognition empirically, an exploratory case study with 194 participants from two institutions was conducted in the second introductory programming course. A latent approach was employed to examine the associations between metacognition and performance in object-oriented programming. Our findings indicate that both metacognitive dimensions have a positive effect on programming. Likewise, the results of the structural equation modeling show that 27% of variance in programming performance is explained by metacognitive behavior. Following the results, metacognition has the potential to be considered as one of the important predictors of performance in introductory programming.<|reference_end|>
arxiv
@article{bubnic2024can, title={Can metacognition predict your success in solving problems? An exploratory case study in programming}, author={Bostjan Bubnic, \v{Z}eljko Kova\v{c}evi\'c, Toma\v{z} Kosar}, journal={arXiv preprint arXiv:2410.06267}, year={2024}, doi={10.1145/3699538.3699593}, archivePrefix={arXiv}, eprint={2410.06267}, primaryClass={cs.CY cs.HC} }
bubnic2024can
arxiv-667195
2410.06270
MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More
<|reference_start|>MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More: Mixture-of-Experts large language models (MoE-LLMs) mark a significant step forward for language models; however, they encounter two critical challenges in practice: 1) expert parameters lead to considerable memory consumption and loading latency; and 2) the current activated experts are redundant, as many tokens may only require a single expert. Motivated by these issues, we investigate the MoE-LLMs and make two key observations: a) different experts exhibit varying behaviors on activation reconstruction error, routing scores, and activated frequencies, highlighting their differing importance, and b) not all tokens are equally important -- only a small subset is critical. Building on these insights, we propose MC-MoE, a training-free Mixture-Compressor for MoE-LLMs, which leverages the significance of both experts and tokens to achieve an extreme compression. First, to mitigate storage and loading overheads, we introduce Pre-Loading Mixed-Precision Quantization, which formulates the adaptive bit-width allocation as a Linear Programming problem, where the objective function balances multi-factors reflecting the importance of each expert. Additionally, we develop Online Dynamic Pruning, which identifies important tokens to retain and dynamically select activated experts for other tokens during inference to optimize efficiency while maintaining performance. Our MC-MoE integrates static quantization and dynamic pruning to collaboratively achieve extreme compression for MoE-LLMs with less accuracy loss, ensuring an optimal trade-off between performance and efficiency. Extensive experiments confirm the effectiveness of our approach. For instance, at 2.54 bits, MC-MoE compresses 76.6% of the model, with only a 3.8% average accuracy loss. During dynamic inference, we further reduce activated parameters by 15%, with a performance drop of less than 0.6%.<|reference_end|>
arxiv
@article{huang2024mc-moe, title={MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More}, author={Wei Huang, Yue Liao, Jianhui Liu, Ruifei He, Haoru Tan, Shiming Zhang, Hongsheng Li, Si Liu, Xiaojuan Qi}, journal={arXiv preprint arXiv:2410.06270}, year={2024}, archivePrefix={arXiv}, eprint={2410.06270}, primaryClass={cs.LG cs.CL} }
huang2024mc-moe
arxiv-667196
2410.06271
Probing the Robustness of Theory of Mind in Large Language Models
<|reference_start|>Probing the Robustness of Theory of Mind in Large Language Models: With the success of ChatGPT and other similarly sized SotA LLMs, claims of emergent human-like social reasoning capabilities, especially Theory of Mind (ToM), in these models have appeared in the scientific literature. On the one hand those ToM-capabilities have been successfully tested using tasks styled similar to those used in psychology (Kosinski, 2023). On the other hand, follow up studies showed that those capabilities vanished when the tasks were slightly altered (Ullman, 2023). In this work we introduce a novel dataset of 68 tasks for probing ToM in LLMs, including potentially challenging variations which are assigned to 10 complexity classes, thereby providing novel insights into the challenges LLMs face with those task variations. We evaluate the ToM performance of four SotA open source LLMs on our dataset and the dataset introduced by (Kosinski, 2023). The overall low goal accuracy across all evaluated models indicates only a limited degree of ToM capabilities. The LLMs' performance on simple complexity class tasks from both datasets is similar, whereas we find a consistent tendency in all tested LLMs to perform poorly on tasks that require the realization that an agent has knowledge of automatic state changes in its environment, even when those are spelled out to the model. For task complications that change the relationship between objects by replacing prepositions, we notice a performance drop in all models, with the strongest impact on the mixture-of-experts model. With our dataset of tasks grouped by complexity we offer directions for further research on how to stabilize and advance ToM capabilities in LLM.<|reference_end|>
arxiv
@article{nickel2024probing, title={Probing the Robustness of Theory of Mind in Large Language Models}, author={Christian Nickel, Laura Schrewe, Lucie Flek}, journal={arXiv preprint arXiv:2410.06271}, year={2024}, archivePrefix={arXiv}, eprint={2410.06271}, primaryClass={cs.CL cs.AI} }
nickel2024probing
arxiv-667197
2410.06272
The Mystery of Compositional Generalization in Graph-based Generative Commonsense Reasoning
<|reference_start|>The Mystery of Compositional Generalization in Graph-based Generative Commonsense Reasoning: While LLMs have emerged as performant architectures for reasoning tasks, their compositional generalization capabilities have been questioned. In this work, we introduce a Compositional Generalization Challenge for Graph-based Commonsense Reasoning (CGGC) that goes beyond previous evaluations that are based on sequences or tree structures - and instead involves a reasoning graph: It requires models to generate a natural sentence based on given concepts and a corresponding reasoning graph, where the presented graph involves a previously unseen combination of relation types. To master this challenge, models need to learn how to reason over relation tuples within the graph, and how to compose them when conceptualizing a verbalization. We evaluate seven well-known LLMs using in-context learning and find that performant LLMs still struggle in compositional generalization. We investigate potential causes of this gap by analyzing the structures of reasoning graphs, and find that different structures present varying levels of difficulty for compositional generalization. Arranging the order of demonstrations according to the structures' difficulty shows that organizing samples in an easy-to-hard schema enhances the compositional generalization ability of LLMs.<|reference_end|>
arxiv
@article{fu2024the, title={The Mystery of Compositional Generalization in Graph-based Generative Commonsense Reasoning}, author={Xiyan Fu, Anette Frank}, journal={arXiv preprint arXiv:2410.06272}, year={2024}, archivePrefix={arXiv}, eprint={2410.06272}, primaryClass={cs.CL} }
fu2024the
arxiv-667198
2410.06273
PREDICT: Preference Reasoning by Evaluating Decomposed preferences Inferred from Candidate Trajectories
<|reference_start|>PREDICT: Preference Reasoning by Evaluating Decomposed preferences Inferred from Candidate Trajectories: Accommodating human preferences is essential for creating AI agents that deliver personalized and effective interactions. Recent work has shown the potential for LLMs to infer preferences from user interactions, but they often produce broad and generic preferences, failing to capture the unique and individualized nature of human preferences. This paper introduces PREDICT, a method designed to enhance the precision and adaptability of inferring preferences. PREDICT incorporates three key elements: (1) iterative refinement of inferred preferences, (2) decomposition of preferences into constituent components, and (3) validation of preferences across multiple trajectories. We evaluate PREDICT on two distinct environments: a gridworld setting and a new text-domain environment (PLUME). PREDICT more accurately infers nuanced human preferences improving over existing baselines by 66.2\% (gridworld environment) and 41.0\% (PLUME).<|reference_end|>
arxiv
@article{aroca-ouellette2024predict, title={PREDICT: Preference Reasoning by Evaluating Decomposed preferences Inferred from Candidate Trajectories}, author={Stephane Aroca-Ouellette, Natalie Mackraz, Barry-John Theobald, Katherine Metcalf}, journal={arXiv preprint arXiv:2410.06273}, year={2024}, archivePrefix={arXiv}, eprint={2410.06273}, primaryClass={cs.AI cs.HC} }
aroca-ouellette2024predict
arxiv-667199
2410.06276
An iterative method for solving elliptic BVP in one-dimension
<|reference_start|>An iterative method for solving elliptic BVP in one-dimension: This paper presents a decomposition method for solving elliptic boundary value problems in one-dimension. The method is an improvement to an existing technique for approximating elliptic systems. It is demonstrated to be computationally superior to the original formulation as fewer computations are required to obtain an approximation of the same accuracy. Convergence of the method is justified and supported by some theoretical results. We show that for sufficiently smooth forcing data, the method always converges for a relatively small truncation order. The method is tested using some problems with exact solutions.<|reference_end|>
arxiv
@article{zelaya2024an, title={An iterative method for solving elliptic BVP in one-dimension}, author={Christian O. Bernal Zelaya, Prosper Torsu}, journal={arXiv preprint arXiv:2410.06276}, year={2024}, archivePrefix={arXiv}, eprint={2410.06276}, primaryClass={math.AP cs.NA math.NA} }
zelaya2024an
arxiv-667200
2410.06277
Is Pontryagin's Maximum Principle all you need? Solving optimal control problems with PMP-inspired neural networks
<|reference_start|>Is Pontryagin's Maximum Principle all you need? Solving optimal control problems with PMP-inspired neural networks: Calculus of Variations is the mathematics of functional optimization, i.e., when the solutions are functions over a time interval. This is particularly important when the time interval is unknown like in minimum-time control problems, so that forward in time solutions are not possible. Calculus of Variations offers a robust framework for learning optimal control and inference. How can this framework be leveraged to design neural networks to solve challenges in control and inference? We propose the Pontryagin's Maximum Principle Neural Network (PMP-net) that is tailored to estimate control and inference solutions, in accordance with the necessary conditions outlined by Pontryagin's Maximum Principle. We assess PMP-net on two classic optimal control and inference problems: optimal linear filtering and minimum-time control. Our findings indicate that PMP-net can be effectively trained in an unsupervised manner to solve these problems without the need for ground-truth data, successfully deriving the classical "Kalman filter" and "bang-bang" control solution. This establishes a new approach for addressing general, possibly yet unsolved, optimal control problems.<|reference_end|>
arxiv
@article{kamtue2024is, title={Is Pontryagin's Maximum Principle all you need? Solving optimal control problems with PMP-inspired neural networks}, author={Kawisorn Kamtue, Jose M.F. Moura, Orathai Sangpetch}, journal={arXiv preprint arXiv:2410.06277}, year={2024}, archivePrefix={arXiv}, eprint={2410.06277}, primaryClass={cs.LG cs.AI math.OC} }
kamtue2024is