corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-665101 | 2410.02566 | Deep Learning-Based Prediction of Suspension Dynamics Performance in Multi-Axle Vehicles | <|reference_start|>Deep Learning-Based Prediction of Suspension Dynamics Performance in Multi-Axle Vehicles: This paper presents a deep learning-based framework for predicting the dynamic performance of suspension systems in multi-axle vehicles, emphasizing the integration of machine learning with traditional vehicle dynamics modeling. A Multi-Task Deep Belief Network Deep Neural Network (MTL-DBN-DNN) was developed to capture the relationships between key vehicle parameters and suspension performance metrics. The model was trained on data generated from numerical simulations and demonstrated superior prediction accuracy compared to conventional DNN models. A comprehensive sensitivity analysis was conducted to assess the impact of various vehicle and suspension parameters on dynamic suspension performance. Additionally, the Suspension Dynamic Performance Index (SDPI) was introduced as a holistic measure to quantify overall suspension performance, accounting for the combined effects of multiple parameters. The findings highlight the effectiveness of multitask learning in improving predictive models for complex vehicle systems.<|reference_end|> | arxiv | @article{lin2024deep,
title={Deep Learning-Based Prediction of Suspension Dynamics Performance in
Multi-Axle Vehicles},
author={Kai Chun Lin and Bo-Yi Lin},
journal={arXiv preprint arXiv:2410.02566},
year={2024},
archivePrefix={arXiv},
eprint={2410.02566},
primaryClass={cs.LG cs.CE cs.NA math.NA}
} | lin2024deep |
arxiv-665102 | 2410.02571 | SuperGS: Super-Resolution 3D Gaussian Splatting via Latent Feature Field and Gradient-guided Splitting | <|reference_start|>SuperGS: Super-Resolution 3D Gaussian Splatting via Latent Feature Field and Gradient-guided Splitting: Recently, 3D Gaussian Splatting (3DGS) has excelled in novel view synthesis with its real-time rendering capabilities and superior quality. However, it faces challenges for high-resolution novel view synthesis (HRNVS) due to the coarse nature of primitives derived from low-resolution input views. To address this issue, we propose Super-Resolution 3DGS (SuperGS), which is an expansion of 3DGS designed with a two-stage coarse-to-fine training framework, utilizing a pretrained low-resolution scene representation as an initialization for super-resolution optimization. Moreover, we introduce Multi-resolution Feature Gaussian Splatting (MFGS) to incorporate a latent feature field for flexible feature sampling and Gradient-guided Selective Splitting (GSS) for effective Gaussian upsampling. Integrating these strategies within the coarse-to-fine framework ensures both high fidelity and memory efficiency. Extensive experiments demonstrate that SuperGS surpasses state-of-the-art HRNVS methods on challenging real-world datasets using only low-resolution inputs.<|reference_end|> | arxiv | @article{xie2024supergs:,
title={SuperGS: Super-Resolution 3D Gaussian Splatting via Latent Feature Field
and Gradient-guided Splitting},
author={Shiyun Xie and Zhiru Wang and Yinghao Zhu and Chengwei Pan},
journal={arXiv preprint arXiv:2410.02571},
year={2024},
archivePrefix={arXiv},
eprint={2410.02571},
primaryClass={cs.CV}
} | xie2024supergs: |
arxiv-665103 | 2410.02572 | Combining Pre- and Post-Demosaicking Noise Removal for RAW Video | <|reference_start|>Combining Pre- and Post-Demosaicking Noise Removal for RAW Video: Denoising is one of the fundamental steps of the processing pipeline that converts data captured by a camera sensor into a display-ready image or video. It is generally performed early in the pipeline, usually before demosaicking, although studies swapping their order or even conducting them jointly have been proposed. With the advent of deep learning, the quality of denoising algorithms has steadily increased. Even so, modern neural networks still have a hard time adapting to new noise levels and scenes, which is indispensable for real-world applications. With those in mind, we propose a self-similarity-based denoising scheme that weights both a pre- and a post-demosaicking denoiser for Bayer-patterned CFA video data. We show that a balance between the two leads to better image quality, and we empirically find that higher noise levels benefit from a higher influence pre-demosaicking. We also integrate temporal trajectory prefiltering steps before each denoiser, which further improve texture reconstruction. The proposed method only requires an estimation of the noise model at the sensor, accurately adapts to any noise level, and is competitive with the state of the art, making it suitable for real-world videography.<|reference_end|> | arxiv | @article{sánchez-beeckman2024combining,
title={Combining Pre- and Post-Demosaicking Noise Removal for RAW Video},
author={Marco S\'anchez-Beeckman (1) and Antoni Buades (1) and Nicola
Brandonisio (2) and Bilel Kanoun (2) ((1) IAC3 \& Departament de
Matem\`atiques i Inform\`atica, Universitat de les Illes Balears, (2) Huawei
Technologies France)},
journal={arXiv preprint arXiv:2410.02572},
year={2024},
archivePrefix={arXiv},
eprint={2410.02572},
primaryClass={eess.IV cs.CV}
} | sánchez-beeckman2024combining |
arxiv-665104 | 2410.02575 | Assessing the Viability of Synthetic Physical Copy Detection Patterns on Different Imaging Systems | <|reference_start|>Assessing the Viability of Synthetic Physical Copy Detection Patterns on Different Imaging Systems: This paper explores the potential of synthetic physical Copy Detection Patterns (CDP) to improve the robustness of anti-counterfeiting systems. By leveraging synthetic physical CDP, we aim at enhancing security and cost-effectiveness across various real-world applications. Our research demonstrates that synthetic CDP offer substantial improvements in authentication accuracy compared to approaches based on traditional digital templates. We conducted extensive tests using both a scanner and a diverse range of mobile phones, validating our approach through ROC analysis. The results indicate that synthetic CDP can reliably differentiate between original and fake samples, making this approach a viable solution for real-world applications, though additional research is required to make this technology scalable across a variety of imaging devices.<|reference_end|> | arxiv | @article{chaban2024assessing,
title={Assessing the Viability of Synthetic Physical Copy Detection Patterns on
Different Imaging Systems},
author={Roman Chaban and Brian Pulfer and Slava Voloshynovskiy},
journal={arXiv preprint arXiv:2410.02575},
year={2024},
archivePrefix={arXiv},
eprint={2410.02575},
primaryClass={cs.CR cs.IT math.IT}
} | chaban2024assessing |
arxiv-665105 | 2410.02579 | Deep Regression 2D-3D Ultrasound Registration for Liver Motion Correction in Focal Tumor Thermal Ablation | <|reference_start|>Deep Regression 2D-3D Ultrasound Registration for Liver Motion Correction in Focal Tumor Thermal Ablation: Liver tumor ablation procedures require accurate placement of the needle applicator at the tumor centroid. The lower-cost and real-time nature of ultrasound (US) has advantages over computed tomography (CT) for applicator guidance, however, in some patients, liver tumors may be occult on US and tumor mimics can make lesion identification challenging. Image registration techniques can aid in interpreting anatomical details and identifying tumors, but their clinical application has been hindered by the tradeoff between alignment accuracy and runtime performance, particularly when compensating for liver motion due to patient breathing or movement. Therefore, we propose a 2D-3D US registration approach to enable intra-procedural alignment that mitigates errors caused by liver motion. Specifically, our approach can correlate imbalanced 2D and 3D US image features and use continuous 6D rotation representations to enhance the model's training stability. The dataset was divided into 2388, 196 and 193 image pairs for training, validation and testing, respectively. Our approach achieved a mean Euclidean distance error of 2.28 mm $\pm$ 1.81 mm and a mean geodesic angular error of 2.99$^{\circ}$ $\pm$ 1.95$^{\circ}$, with a runtime of 0.22 seconds per 2D-3D US image pair. These results demonstrate that our approach can achieve accurate alignment and clinically acceptable runtime, indicating potential for clinical translation.<|reference_end|> | arxiv | @article{xing2024deep,
title={Deep Regression 2D-3D Ultrasound Registration for Liver Motion
Correction in Focal Tumor Thermal Ablation},
author={Shuwei Xing and Derek W. Cool and David Tessier and Elvis C.S. Chen
and Terry M. Peters and Aaron Fenster},
journal={arXiv preprint arXiv:2410.02579},
year={2024},
archivePrefix={arXiv},
eprint={2410.02579},
primaryClass={eess.IV cs.AI}
} | xing2024deep |
arxiv-665106 | 2410.02581 | Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance | <|reference_start|>Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance: Multi-Agent Reinforcement Learning (MARL) struggles with sample inefficiency and poor generalization [1]. These challenges are partially due to a lack of structure or inductive bias in the neural networks typically used in learning the policy. One such form of structure that is commonly observed in multi-agent scenarios is symmetry. The field of Geometric Deep Learning has developed Equivariant Graph Neural Networks (EGNN) that are equivariant (or symmetric) to rotations, translations, and reflections of nodes. Incorporating equivariance has been shown to improve learning efficiency and decrease error [2]. In this paper, we demonstrate that EGNNs improve the sample efficiency and generalization in MARL. However, we also show that a naive application of EGNNs to MARL results in poor early exploration due to a bias in the EGNN structure. To mitigate this bias, we present Exploration-enhanced Equivariant Graph Neural Networks or E2GN2. We compare E2GN2 to other common function approximators using common MARL benchmarks MPE and SMACv2. E2GN2 demonstrates a significant improvement in sample efficiency, greater final reward convergence, and a 2x-5x gain over standard GNNs in our generalization tests. These results pave the way for more reliable and effective solutions in complex multi-agent systems.<|reference_end|> | arxiv | @article{mcclellan2024boosting,
title={Boosting Sample Efficiency and Generalization in Multi-agent
Reinforcement Learning via Equivariance},
author={Joshua McClellan and Naveed Haghani and John Winder and Furong Huang
and Pratap Tokekar},
journal={arXiv preprint arXiv:2410.02581},
year={2024},
archivePrefix={arXiv},
eprint={2410.02581},
primaryClass={cs.LG cs.AI}
} | mcclellan2024boosting |
arxiv-665107 | 2410.02583 | Sample-Optimal Quantum State Tomography for Structured Quantum States in One Dimension | <|reference_start|>Sample-Optimal Quantum State Tomography for Structured Quantum States in One Dimension: Quantum state tomography (QST) remains the gold standard for benchmarking and verifying quantum devices. A recent study has proved that, with Haar random projective measurements, only $O(n^3)$ state copies are required to guarantee bounded recovery error of a matrix product operator (MPO) state of $n$ qubits. While this result provides formal evidence that quantum states with an efficient classical representation can be reconstructed with an efficient number of state copies, the number of state copies required is still significantly larger than the number of independent parameters in the classical representation. In this paper, we attempt to narrow this gap and study whether the number of state copies can saturate the information theoretic bound (i.e., $O(n)$, the number of parameters in the MPOs) using physical quantum measurements. We answer this question affirmatively by using a class of Informationally Complete Positive Operator-Valued Measures (IC-POVMs), including symmetric IC-POVMs (SIC-POVMs) and spherical $t$-designs. For SIC-POVMs and (approximate) spherical 2-designs, we show that the number of state copies to guarantee bounded recovery error of an MPO state with a constrained least-squares estimator depends on the probability distribution of the MPO under the POVM but scales only linearly with $n$ when the distribution is approximately uniform. For spherical $t$-designs with $t\ge3$, we prove that only a number of state copies proportional to the number of independent parameters in the MPO is needed for a guaranteed recovery of any state represented by an MPO.
Moreover, we propose a projected gradient descent (PGD) algorithm to solve the constrained least-squares problem and show that it can efficiently find an estimate with bounded recovery error when appropriately initialized.<|reference_end|> | arxiv | @article{qin2024sample-optimal,
title={Sample-Optimal Quantum State Tomography for Structured Quantum States in
One Dimension},
author={Zhen Qin and Casey Jameson and Alireza Goldar and Michael B. Wakin and
Zhexuan Gong and Zhihui Zhu},
journal={arXiv preprint arXiv:2410.02583},
year={2024},
archivePrefix={arXiv},
eprint={2410.02583},
primaryClass={quant-ph cs.IT eess.SP math.IT math.OC}
} | qin2024sample-optimal |
arxiv-665108 | 2410.02584 | Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM Interactions | <|reference_start|>Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM Interactions: As Large Language Models (LLMs) continue to evolve, they are increasingly being employed in numerous studies to simulate societies and execute diverse social tasks. However, LLMs are susceptible to societal biases due to their exposure to human-generated data. Given that LLMs are being used to gain insights into various societal aspects, it is essential to mitigate these biases. To that end, our study investigates the presence of implicit gender biases in multi-agent LLM interactions and proposes two strategies to mitigate these biases. We begin by creating a dataset of scenarios where implicit gender biases might arise, and subsequently develop a metric to assess the presence of biases. Our empirical analysis reveals that LLMs generate outputs characterized by strong implicit bias associations (>= 50\% of the time). Furthermore, these biases tend to escalate following multi-agent interactions. To mitigate them, we propose two strategies: self-reflection with in-context examples (ICE); and supervised fine-tuning. Our research demonstrates that both methods effectively mitigate implicit biases, with the ensemble of fine-tuning and self-reflection proving to be the most successful.<|reference_end|> | arxiv | @article{borah2024towards,
title={Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM
Interactions},
author={Angana Borah and Rada Mihalcea},
journal={arXiv preprint arXiv:2410.02584},
year={2024},
archivePrefix={arXiv},
eprint={2410.02584},
primaryClass={cs.CL cs.CY}
} | borah2024towards |
arxiv-665109 | 2410.02587 | An Improved Variational Method for Image Denoising | <|reference_start|>An Improved Variational Method for Image Denoising: The total variation (TV) method is an image denoising technique that aims to reduce noise by minimizing the total variation of the image, which measures the variation in pixel intensities. The TV method has been widely applied in image processing and computer vision for its ability to preserve edges and enhance image quality. In this paper, we propose an improved TV model for image denoising and the associated numerical algorithm to carry out the procedure, which is particularly effective in removing several types of noises and their combinations. Our improved model admits a unique solution and the associated numerical algorithm guarantees the convergence. Numerical experiments are demonstrated to show improved effectiveness and denoising quality compared to other TV models. Such encouraging results further enhance the utility of the TV method in image processing.<|reference_end|> | arxiv | @article{huang2024an,
title={An Improved Variational Method for Image Denoising},
author={Jing-En Huang and Jia-Wei Liao and Ku-Te Lin and Yu-Ju Tsai and Mei-Heng Yueh},
journal={arXiv preprint arXiv:2410.02587},
year={2024},
archivePrefix={arXiv},
eprint={2410.02587},
primaryClass={cs.CV cs.NA math.NA}
} | huang2024an |
arxiv-665110 | 2410.02589 | Expected Maximin Fairness in Max-Cut and other Combinatorial Optimization Problems | <|reference_start|>Expected Maximin Fairness in Max-Cut and other Combinatorial Optimization Problems: Maximin fairness is the ideal that the worst-off group (or individual) should be treated as well as possible. Literature on maximin fairness in various decision-making settings has grown in recent years, but theoretical results are sparse. In this paper, we explore the challenges inherent to maximin fairness in combinatorial optimization. We begin by showing that (1) optimal maximin-fair solutions are bounded by non-maximin-fair optimal solutions, and (2) stochastic maximin-fair solutions exceed their deterministic counterparts in expectation for a broad class of combinatorial optimization problems. In the remainder of the paper, we use the special case of Max-Cut to demonstrate challenges in defining and implementing maximin fairness.<|reference_end|> | arxiv | @article{salem2024expected,
title={Expected Maximin Fairness in Max-Cut and other Combinatorial
Optimization Problems},
author={Jad Salem and Reuben Tate and Stephan Eidenbenz},
journal={arXiv preprint arXiv:2410.02589},
year={2024},
number={LA-UR-24-30325},
archivePrefix={arXiv},
eprint={2410.02589},
primaryClass={cs.DS cs.CY math.OC}
} | salem2024expected |
arxiv-665111 | 2410.02590 | Generalization emerges from local optimization in a self-organized learning network | <|reference_start|>Generalization emerges from local optimization in a self-organized learning network: We design and analyze a new paradigm for building supervised learning networks, driven only by local optimization rules without relying on a global error function. Traditional neural networks with a fixed topology are made up of identical nodes and derive their expressiveness from an appropriate adjustment of connection weights. In contrast, our network stores new knowledge in the nodes accurately and instantaneously, in the form of a lookup table. Only then is some of this information structured and incorporated into the network geometry. The training error is initially zero by construction and remains so throughout the network topology transformation phase. The latter involves a small number of local topological transformations, such as splitting or merging of nodes and adding binary connections between them. The choice of operations to be carried out is only driven by optimization of expressivity at the local scale. What we are primarily looking for in a learning network is its ability to generalize, i.e. its capacity to correctly answer questions for which it has never learned the answers. We show on numerous examples of classification tasks that the networks generated by our algorithm systematically reach such a state of perfect generalization when the number of learned examples becomes sufficiently large. We report on the dynamics of the change of state and show that it is abrupt and has the distinctive characteristics of a first order phase transition, a phenomenon already observed for traditional learning networks and known as grokking. 
In addition to proposing a non-potential approach for the construction of learning networks, our algorithm makes it possible to rethink the grokking transition in a new light, under which acquisition of training data and topological structuring of data are completely decoupled phenomena.<|reference_end|> | arxiv | @article{barland2024generalization,
title={Generalization emerges from local optimization in a self-organized
learning network},
author={S. Barland and L. Gil},
journal={arXiv preprint arXiv:2410.02590},
year={2024},
archivePrefix={arXiv},
eprint={2410.02590},
primaryClass={nlin.AO cond-mat.dis-nn cs.LG}
} | barland2024generalization |
arxiv-665112 | 2410.02592 | IC3M: In-Car Multimodal Multi-object Monitoring for Abnormal Status of Both Driver and Passengers | <|reference_start|>IC3M: In-Car Multimodal Multi-object Monitoring for Abnormal Status of Both Driver and Passengers: Recently, in-car monitoring has emerged as a promising technology for detecting early-stage abnormal status of the driver and providing timely alerts to prevent traffic accidents. Although training models with multimodal data enhances the reliability of abnormal status detection, the scarcity of labeled data and the imbalance of class distribution impede the extraction of critical abnormal state features, significantly deteriorating training performance. Furthermore, missing modalities due to environment and hardware limitations further exacerbate the challenge of abnormal status identification. More importantly, monitoring abnormal health conditions of passengers, particularly in elderly care, is of paramount importance but remains underexplored. To address these challenges, we introduce our IC3M, an efficient camera-rotation-based multimodal framework for monitoring both driver and passengers in a car. Our IC3M comprises two key modules: an adaptive threshold pseudo-labeling strategy and a missing modality reconstruction. The former customizes pseudo-labeling thresholds for different classes based on the class distribution, generating class-balanced pseudo labels to guide model training effectively, while the latter leverages crossmodality relationships learned from limited labels to accurately recover missing modalities by distribution transferring from available modalities. Extensive experimental results demonstrate that IC3M outperforms state-of-the-art benchmarks in accuracy, precision, and recall while exhibiting superior robustness under limited labeled data and severe missing modality.<|reference_end|> | arxiv | @article{fang2024ic3m:,
title={IC3M: In-Car Multimodal Multi-object Monitoring for Abnormal Status of
Both Driver and Passengers},
author={Zihan Fang and Zheng Lin and Senkang Hu and Hangcheng Cao and Yiqin
Deng and Xianhao Chen and Yuguang Fang},
journal={arXiv preprint arXiv:2410.02592},
year={2024},
archivePrefix={arXiv},
eprint={2410.02592},
primaryClass={cs.CV cs.AI cs.LG cs.SY eess.SY}
} | fang2024ic3m: |
arxiv-665113 | 2410.02595 | Extremum Seeking Controlled Wiggling for Tactile Insertion | <|reference_start|>Extremum Seeking Controlled Wiggling for Tactile Insertion: When humans perform insertion tasks such as inserting a cup into a cupboard, routing a cable, or key insertion, they wiggle the object and observe the process through tactile and proprioceptive feedback. While recent advances in tactile sensors have resulted in tactile-based approaches, there has not been a generalized formulation based on wiggling similar to human behavior. Thus, we propose an extremum-seeking control law that can insert four keys into four types of locks without control parameter tuning despite significant variation in lock type. The resulting model-free formulation wiggles the end effector pose to maximize insertion depth while minimizing strain as measured by a GelSight Mini tactile sensor that grasps a key. The algorithm achieves a 71\% success rate over 120 randomly initialized trials with uncertainty in both translation and orientation. Over 240 deterministically initialized trials, where only one translation or rotation parameter is perturbed, 84\% of trials succeeded. Given tactile feedback at 13 Hz, the mean insertion times for these groups of trials are 262 and 147 seconds, respectively.<|reference_end|> | arxiv | @article{burner2024extremum,
title={Extremum Seeking Controlled Wiggling for Tactile Insertion},
author={Levi Burner and Pavan Mantripragada and Gabriele M. Caddeo and Lorenzo
Natale and Cornelia Ferm\"uller and Yiannis Aloimonos},
journal={arXiv preprint arXiv:2410.02595},
year={2024},
archivePrefix={arXiv},
eprint={2410.02595},
primaryClass={cs.RO}
} | burner2024extremum |
arxiv-665114 | 2410.02596 | Beyond Squared Error: Exploring Loss Design for Enhanced Training of Generative Flow Networks | <|reference_start|>Beyond Squared Error: Exploring Loss Design for Enhanced Training of Generative Flow Networks: Generative Flow Networks (GFlowNets) are a novel class of generative models designed to sample from unnormalized distributions and have found applications in various important tasks, attracting great research interest in their training algorithms. In general, GFlowNets are trained by fitting the forward flow to the backward flow on sampled training objects. Prior work focused on the choice of training objects, parameterizations, sampling and resampling strategies, and backward policies, aiming to enhance credit assignment, exploration, or exploitation of the training process. However, the choice of regression loss, which can highly influence the exploration and exploitation behavior of the under-training policy, has been overlooked. Due to the lack of theoretical understanding for choosing an appropriate regression loss, most existing algorithms train the flow network by minimizing the squared error of the forward and backward flows in log-space, i.e., using the quadratic regression loss. In this work, we rigorously prove that distinct regression losses correspond to specific divergence measures, enabling us to design and analyze regression losses according to the desired properties of the corresponding divergence measures. Specifically, we examine two key properties: zero-forcing and zero-avoiding, where the former promotes exploitation and higher rewards, and the latter encourages exploration and enhances diversity. Based on our theoretical framework, we propose three novel regression losses, namely, Shifted-Cosh, Linex(1/2), and Linex(1). We evaluate them across three benchmarks: hyper-grid, bit-sequence generation, and molecule generation. 
Our proposed losses are compatible with most existing training algorithms, and significantly improve the performances of the algorithms concerning convergence speed, sample diversity, and robustness.<|reference_end|> | arxiv | @article{hu2024beyond,
title={Beyond Squared Error: Exploring Loss Design for Enhanced Training of
Generative Flow Networks},
author={Rui Hu and Yifan Zhang and Zhuoran Li and Longbo Huang},
journal={arXiv preprint arXiv:2410.02596},
year={2024},
archivePrefix={arXiv},
eprint={2410.02596},
primaryClass={cs.LG cs.AI}
} | hu2024beyond |
arxiv-665115 | 2410.02597 | Three-in-One: Fast and Accurate Transducer for Hybrid-Autoregressive ASR | <|reference_start|>Three-in-One: Fast and Accurate Transducer for Hybrid-Autoregressive ASR: We present \textbf{H}ybrid-\textbf{A}utoregressive \textbf{IN}ference Tr\textbf{AN}sducers (HAINAN), a novel architecture for speech recognition that extends the Token-and-Duration Transducer (TDT) model. Trained with randomly masked predictor network outputs, HAINAN supports both autoregressive inference with all network components and non-autoregressive inference without the predictor. Additionally, we propose a novel semi-autoregressive inference paradigm that first generates an initial hypothesis using non-autoregressive inference, followed by refinement steps where each token prediction is regenerated using parallelized autoregression on the initial hypothesis. Experiments on multiple datasets across different languages demonstrate that HAINAN achieves efficiency parity with CTC in non-autoregressive mode and with TDT in autoregressive mode. In terms of accuracy, autoregressive HAINAN outperforms TDT and RNN-T, while non-autoregressive HAINAN significantly outperforms CTC. Semi-autoregressive inference further enhances the model's accuracy with minimal computational overhead, and even outperforms TDT results in some cases. These results highlight HAINAN's flexibility in balancing accuracy and speed, positioning it as a strong candidate for real-world speech recognition applications.<|reference_end|> | arxiv | @article{xu2024three-in-one:,
title={Three-in-One: Fast and Accurate Transducer for Hybrid-Autoregressive ASR},
author={Hainan Xu and Travis M. Bartley and Vladimir Bataev and Boris Ginsburg},
journal={arXiv preprint arXiv:2410.02597},
year={2024},
archivePrefix={arXiv},
eprint={2410.02597},
primaryClass={cs.LG}
} | xu2024three-in-one: |
arxiv-665116 | 2410.02598 | High-Efficiency Neural Video Compression via Hierarchical Predictive Learning | <|reference_start|>High-Efficiency Neural Video Compression via Hierarchical Predictive Learning: The enhanced Deep Hierarchical Video Compression-DHVC 2.0-has been introduced. This single-model neural video codec operates across a broad range of bitrates, delivering not only superior compression performance to representative methods but also impressive complexity efficiency, enabling real-time processing with a significantly smaller memory footprint on standard GPUs. These remarkable advancements stem from the use of hierarchical predictive coding. Each video frame is uniformly transformed into multiscale representations through hierarchical variational autoencoders. For a specific scale's feature representation of a frame, its corresponding latent residual variables are generated by referencing lower-scale spatial features from the same frame and then conditionally entropy-encoded using a probabilistic model whose parameters are predicted using same-scale temporal reference from previous frames and lower-scale spatial reference of the current frame. This feature-space processing operates from the lowest to the highest scale of each frame, completely eliminating the need for the complexity-intensive motion estimation and compensation techniques that have been standard in video codecs for decades. The hierarchical approach facilitates parallel processing, accelerating both encoding and decoding, and supports transmission-friendly progressive decoding, making it particularly advantageous for networked video applications in the presence of packet loss. Source codes will be made available.<|reference_end|> | arxiv | @article{lu2024high-efficiency,
title={High-Efficiency Neural Video Compression via Hierarchical Predictive
Learning},
author={Ming Lu and Zhihao Duan and Wuyang Cong and Dandan Ding and Fengqing
Zhu and Zhan Ma},
journal={arXiv preprint arXiv:2410.02598},
year={2024},
archivePrefix={arXiv},
eprint={2410.02598},
primaryClass={eess.IV cs.CV}
} | lu2024high-efficiency |
arxiv-665117 | 2410.02599 | Disaggregated Memory with SmartNIC Offloading: a Case Study on Graph Processing | <|reference_start|>Disaggregated Memory with SmartNIC Offloading: a Case Study on Graph Processing: Disaggregated memory breaks the boundary of monolithic servers to enable memory provisioning on demand. Using network-attached memory to provide memory expansion for memory-intensive applications on compute nodes can improve the overall memory utilization on a cluster and reduce the total cost of ownership. However, current software solutions for leveraging network-attached memory must consume resources on the compute node for memory management tasks. Emerging off-path smartNICs provide general-purpose programmability at low-cost low-power cores. This work provides a general architecture design that enables network-attached memory and offloading tasks onto off-path programmable SmartNIC. We provide a prototype implementation called SODA on Nvidia BlueField DPU. SODA adapts communication paths and data transfer alternatives, pipelines data movement stages, and enables customizable data caching and prefetching optimizations. We evaluate SODA in five representative graph applications on real-world graphs. Our results show that SODA can achieve up to 7.9x speedup compared to node-local SSD and reduce network traffic by 42% compared to disaggregated memory without SmartNIC offloading at similar or better performance.<|reference_end|> | arxiv | @article{wahlgren2024disaggregated,
title={Disaggregated Memory with SmartNIC Offloading: a Case Study on Graph
Processing},
author={Jacob Wahlgren and Gabin Schieffer and Maya Gokhale and Roger Pearce and Ivy Peng},
journal={arXiv preprint arXiv:2410.02599},
year={2024},
archivePrefix={arXiv},
eprint={2410.02599},
primaryClass={cs.DC}
} | wahlgren2024disaggregated |
arxiv-665118 | 2410.02601 | Diffusion & Adversarial Schr\"odinger Bridges via Iterative Proportional Markovian Fitting | <|reference_start|>Diffusion & Adversarial Schr\"odinger Bridges via Iterative Proportional Markovian Fitting: The Iterative Markovian Fitting (IMF) procedure based on iterative reciprocal and Markovian projections has recently been proposed as a powerful method for solving the Schr\"odinger Bridge problem. However, it has been observed that for the practical implementation of this procedure, it is crucial to alternate between fitting a forward and backward time diffusion at each iteration. Such implementation is thought to be a practical heuristic, which is required to stabilize training and obtain good results in applications such as unpaired domain translation. In our work, we show that this heuristic closely connects with the pioneer approaches for the Schr\"odinger Bridge based on the Iterative Proportional Fitting (IPF) procedure. Namely, we find that the practical implementation of IMF is, in fact, a combination of IMF and IPF procedures, and we call this combination the Iterative Proportional Markovian Fitting (IPMF) procedure. We show both theoretically and practically that this combined IPMF procedure can converge under more general settings, thus, showing that the IPMF procedure opens a door towards developing a unified framework for solving Schr\"odinger Bridge problems.<|reference_end|> | arxiv | @article{kholkin2024diffusion,
title={Diffusion \& Adversarial Schr\"odinger Bridges via Iterative Proportional
Markovian Fitting},
author={Sergei Kholkin and Grigoriy Ksenofontov and David Li and Nikita
Kornilov and Nikita Gushchin and Evgeny Burnaev and Alexander Korotin},
journal={arXiv preprint arXiv:2410.02601},
year={2024},
archivePrefix={arXiv},
eprint={2410.02601},
primaryClass={cs.LG}
} | kholkin2024diffusion |
arxiv-665119 | 2410.02603 | Agents' Room: Narrative Generation through Multi-step Collaboration | <|reference_start|>Agents' Room: Narrative Generation through Multi-step Collaboration: Writing compelling fiction is a multifaceted process combining elements such as crafting a plot, developing interesting characters, and using evocative language. While large language models (LLMs) show promise for story writing, they currently rely heavily on intricate prompting, which limits their use. We propose Agents' Room, a generation framework inspired by narrative theory, that decomposes narrative writing into subtasks tackled by specialized agents. To illustrate our method, we introduce Tell Me A Story, a high-quality dataset of complex writing prompts and human-written stories, and a novel evaluation framework designed specifically for assessing long narratives. We show that Agents' Room generates stories that are preferred by expert evaluators over those produced by baseline systems by leveraging collaboration and specialization to decompose the complex story writing task into tractable components. We provide extensive analysis with automated and human-based metrics of the generated output.<|reference_end|> | arxiv | @article{huot2024agents',
title={Agents' Room: Narrative Generation through Multi-step Collaboration},
author={Fantine Huot and Reinald Kim Amplayo and Jennimaria Palomaki and Alice
Shoshana Jakobovits and Elizabeth Clark and Mirella Lapata},
journal={arXiv preprint arXiv:2410.02603},
year={2024},
archivePrefix={arXiv},
eprint={2410.02603},
primaryClass={cs.CL cs.LG cs.MA}
} | huot2024agents' |
arxiv-665120 | 2410.02604 | Long-Sequence Recommendation Models Need Decoupled Embeddings | <|reference_start|>Long-Sequence Recommendation Models Need Decoupled Embeddings: Lifelong user behavior sequences, comprising up to tens of thousands of history behaviors, are crucial for capturing user interests and predicting user responses in modern recommendation systems. A two-stage paradigm is typically adopted to handle these long sequences: a few relevant behaviors are first searched from the original long sequences via an attention mechanism in the first stage and then aggregated with the target item to construct a discriminative representation for prediction in the second stage. In this work, we identify and characterize, for the first time, a neglected deficiency in existing long-sequence recommendation models: a single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes. Initial attempts to address this issue using linear projections -- a technique borrowed from language processing -- proved ineffective, shedding light on the unique challenges of recommendation models. To overcome this, we propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are initialized and learned separately to fully decouple attention and representation. Extensive experiments and analysis demonstrate that DARE provides more accurate search of correlated behaviors and outperforms baselines with AUC gains up to 0.9% on public datasets and notable online system improvements. Furthermore, decoupling embedding spaces allows us to reduce the attention embedding dimension and accelerate the search procedure by 50% without significant performance impact, enabling more efficient, high-performance online serving.<|reference_end|> | arxiv | @article{feng2024long-sequence,
title={Long-Sequence Recommendation Models Need Decoupled Embeddings},
author={Ningya Feng and Junwei Pan and Jialong Wu and Baixu Chen and Ximei
Wang and Qian Li and Xian Hu and Jie Jiang and Mingsheng Long},
journal={arXiv preprint arXiv:2410.02604},
year={2024},
archivePrefix={arXiv},
eprint={2410.02604},
primaryClass={cs.IR cs.LG}
} | feng2024long-sequence |
arxiv-665121 | 2410.02605 | Beyond Expected Returns: A Policy Gradient Algorithm for Cumulative Prospect Theoretic Reinforcement Learning | <|reference_start|>Beyond Expected Returns: A Policy Gradient Algorithm for Cumulative Prospect Theoretic Reinforcement Learning: The widely used expected utility theory has been shown to be empirically inconsistent with human preferences in the psychology and behavioral economy literatures. Cumulative Prospect Theory (CPT) has been developed to fill in this gap and provide a better model for human-based decision-making supported by empirical evidence. It allows to express a wide range of attitudes and perceptions towards risk, gains and losses. A few years ago, CPT has been combined with Reinforcement Learning (RL) to formulate a CPT policy optimization problem where the goal of the agent is to search for a policy generating long-term returns which are aligned with their preferences. In this work, we revisit this policy optimization problem and provide new insights on optimal policies and their nature depending on the utility function under consideration. We further derive a novel policy gradient theorem for the CPT policy optimization objective generalizing the seminal corresponding result in standard RL. This result enables us to design a model-free policy gradient algorithm to solve the CPT-RL problem. We illustrate the performance of our algorithm in simple examples motivated by traffic control and electricity management applications. We also demonstrate that our policy gradient algorithm scales better to larger state spaces compared to the existing zeroth order algorithm for solving the same problem.<|reference_end|> | arxiv | @article{lepel2024beyond,
title={Beyond Expected Returns: A Policy Gradient Algorithm for Cumulative
Prospect Theoretic Reinforcement Learning},
author={Olivier Lepel and Anas Barakat},
journal={arXiv preprint arXiv:2410.02605},
year={2024},
archivePrefix={arXiv},
eprint={2410.02605},
primaryClass={cs.LG cs.AI}
} | lepel2024beyond |
arxiv-665122 | 2410.02606 | Can You Link Up With Treewidth? | <|reference_start|>Can You Link Up With Treewidth?: A central result of Marx [ToC '10] proves that there are $k$-vertex graphs $H$ of maximum degree $3$ such that $n^{o(k /\log k)}$ time algorithms for detecting colorful $H$-subgraphs would refute the Exponential-Time Hypothesis (ETH). This result is widely used to obtain almost-tight conditional lower bounds for parameterized problems under ETH. Our first contribution is a new and fully self-contained proof of this result that further simplifies a recent work by Karthik et al. [SOSA 2024]. Towards this end, we introduce a novel graph parameter, the linkage capacity $\gamma(H)$, and show with an elementary proof that detecting colorful $H$-subgraphs in time $n^{o(\gamma(H))}$ refutes ETH. Then, we use a simple construction of communication networks credited to Bene\v{s} to obtain $k$-vertex graphs of maximum degree $3$ and linkage capacity $\Omega(k / \log k)$, avoiding the use of expander graphs. We also show that every graph $H$ of treewidth $t$ has linkage capacity $\Omega(t / \log t)$, thus recovering the stronger result of Marx [ToC '10] with a simplified proof. Additionally, we obtain new tight lower bounds for certain types of patterns by analyzing their linkage capacity. For example, we prove that almost all $k$-vertex graphs of polynomial average degree $\Omega(k^{\beta})$ for some $\beta > 0$ have linkage capacity $\Theta(k)$, which implies tight lower bounds for such patterns $H$. As an application of these results, we also obtain tight lower bounds for counting small induced subgraphs having a certain property $\Phi$, improving bounds from [Roth et al., FOCS 2020].<|reference_end|> | arxiv | @article{curticapean2024can,
title={Can You Link Up With Treewidth?},
author={Radu Curticapean and Simon D\"oring and Daniel Neuen and Jiaheng Wang},
journal={arXiv preprint arXiv:2410.02606},
year={2024},
archivePrefix={arXiv},
eprint={2410.02606},
primaryClass={cs.DS cs.CC}
} | curticapean2024can |
arxiv-665123 | 2410.02609 | Ethio-Fake: Cutting-Edge Approaches to Combat Fake News in Under-Resourced Languages Using Explainable AI | <|reference_start|>Ethio-Fake: Cutting-Edge Approaches to Combat Fake News in Under-Resourced Languages Using Explainable AI: The proliferation of fake news has emerged as a significant threat to the integrity of information dissemination, particularly on social media platforms. Misinformation can spread quickly due to the ease of creating and disseminating content, affecting public opinion and sociopolitical events. Identifying false information is therefore essential to reducing its negative consequences and maintaining the reliability of online news sources. Traditional approaches to fake news detection often rely solely on content-based features, overlooking the crucial role of social context in shaping the perception and propagation of news articles. In this paper, we propose a comprehensive approach that integrates social context-based features with news content features to enhance the accuracy of fake news detection in under-resourced languages. We perform several experiments utilizing a variety of methodologies, including traditional machine learning, neural networks, ensemble learning, and transfer learning. Assessment of the outcomes of the experiments shows that the ensemble learning approach has the highest accuracy, achieving a 0.99 F1 score. Additionally, when compared with monolingual models, the fine-tuned model with the target language outperformed others, achieving a 0.94 F1 score. We analyze the functioning of the models, considering the important features that contribute to model performance, using explainable AI techniques.<|reference_end|> | arxiv | @article{yigezu2024ethio-fake:,
title={Ethio-Fake: Cutting-Edge Approaches to Combat Fake News in
Under-Resourced Languages Using Explainable AI},
author={Mesay Gemeda Yigezu and Melkamu Abay Mersha and Girma Yohannis Bade
and Jugal Kalita and Olga Kolesnikova and Alexander Gelbukh},
journal={ACLing 2024: 6th International Conference on AI in Computational
Linguistics},
year={2024},
archivePrefix={arXiv},
eprint={2410.02609},
primaryClass={cs.CL}
} | yigezu2024ethio-fake: |
arxiv-665124 | 2410.02610 | Research Directions and Modeling Guidelines for Industrial Internet of Things Applications | <|reference_start|>Research Directions and Modeling Guidelines for Industrial Internet of Things Applications: The Industrial Internet of Things (IIoT) paradigm has emerged as a transformative force, revolutionizing industrial processes by integrating advanced wireless technologies into traditional procedures to enhance their efficiency. The importance of this paradigm shift has produced a massive, yet heterogeneous, proliferation of scientific contributions. However, these works lack a standardized and cohesive characterization of the IIoT framework coming from different entities, like the 3rd Generation Partnership Project (3GPP) or the 5G Alliance for Connected Industries and Automation (5G-ACIA), resulting in divergent perspectives and potentially hindering interoperability. To bridge this gap, this article offers a unified characterization of (i) the main IIoT application domains, (ii) their respective requirements, (iii) the principal technological gaps existing in the current literature, and, most importantly, (iv) we propose a systematic approach for assessing and addressing the identified research challenges. Therefore, this article serves as a roadmap for future research endeavors, promoting a unified vision of the IIoT paradigm and fostering collaborative efforts to advance the field.<|reference_end|> | arxiv | @article{cuozzo2024research,
title={Research Directions and Modeling Guidelines for Industrial Internet of
Things Applications},
author={Giampaolo Cuozzo and Enrico Testi and Salvatore Riolo and Luciano
Miuccio and Gianluca Cena and Gianni Pasolini and Luca De Nardis and Daniela
Panno and Marco Chiani and Maria-Gabriella Di Benedetto and Enrico Buracchini
and Roberto Verdone},
journal={arXiv preprint arXiv:2410.02610},
year={2024},
archivePrefix={arXiv},
eprint={2410.02610},
primaryClass={cs.NI}
} | cuozzo2024research |
arxiv-665125 | 2410.02611 | IndicSentEval: How Effectively do Multilingual Transformer Models encode Linguistic Properties for Indic Languages? | <|reference_start|>IndicSentEval: How Effectively do Multilingual Transformer Models encode Linguistic Properties for Indic Languages?: Transformer-based models have revolutionized the field of natural language processing. To understand why they perform so well and to assess their reliability, several studies have focused on questions such as: Which linguistic properties are encoded by these models, and to what extent? How robust are these models in encoding linguistic properties when faced with perturbations in the input text? However, these studies have mainly focused on BERT and the English language. In this paper, we investigate similar questions regarding encoding capability and robustness for 8 linguistic properties across 13 different perturbations in 6 Indic languages, using 9 multilingual Transformer models (7 universal and 2 Indic-specific). To conduct this study, we introduce a novel multilingual benchmark dataset, IndicSentEval, containing approximately $\sim$47K sentences. Surprisingly, our probing analysis of surface, syntactic, and semantic properties reveals that while almost all multilingual models demonstrate consistent encoding performance for English, they show mixed results for Indic languages. As expected, Indic-specific multilingual models capture linguistic properties in Indic languages better than universal models. Intriguingly, universal models broadly exhibit better robustness compared to Indic-specific models, particularly under perturbations such as dropping both nouns and verbs, dropping only verbs, or keeping only nouns. Overall, this study provides valuable insights into probing and perturbation-specific strengths and weaknesses of popular multilingual Transformer-based models for different Indic languages. 
We make our code and dataset publicly available [https://tinyurl.com/IndicSentEval].<|reference_end|> | arxiv | @article{aravapalli2024indicsenteval:,
title={IndicSentEval: How Effectively do Multilingual Transformer Models encode
Linguistic Properties for Indic Languages?},
author={Akhilesh Aravapalli and Mounika Marreddy and Subba Reddy Oota and
Radhika Mamidi and Manish Gupta},
journal={arXiv preprint arXiv:2410.02611},
year={2024},
archivePrefix={arXiv},
eprint={2410.02611},
primaryClass={cs.CL cs.AI cs.LG}
} | aravapalli2024indicsenteval: |
arxiv-665126 | 2410.02613 | NL-Eye: Abductive NLI for Images | <|reference_start|>NL-Eye: Abductive NLI for Images: Will a Visual Language Model (VLM)-based bot warn us about slipping if it detects a wet floor? Recent VLMs have demonstrated impressive capabilities, yet their ability to infer outcomes and causes remains underexplored. To address this, we introduce NL-Eye, a benchmark designed to assess VLMs' visual abductive reasoning skills. NL-Eye adapts the abductive Natural Language Inference (NLI) task to the visual domain, requiring models to evaluate the plausibility of hypothesis images based on a premise image and explain their decisions. NL-Eye consists of 350 carefully curated triplet examples (1,050 images) spanning diverse reasoning categories: physical, functional, logical, emotional, cultural, and social. The data curation process involved two steps - writing textual descriptions and generating images using text-to-image models, both requiring substantial human involvement to ensure high-quality and challenging scenes. Our experiments show that VLMs struggle significantly on NL-Eye, often performing at random baseline levels, while humans excel in both plausibility prediction and explanation quality. This demonstrates a deficiency in the abductive reasoning capabilities of modern VLMs. NL-Eye represents a crucial step toward developing VLMs capable of robust multimodal reasoning for real-world applications, including accident-prevention bots and generated video verification.<|reference_end|> | arxiv | @article{ventura2024nl-eye:,
title={NL-Eye: Abductive NLI for Images},
author={Mor Ventura and Michael Toker and Nitay Calderon and Zorik Gekhman and
Yonatan Bitton and Roi Reichart},
journal={arXiv preprint arXiv:2410.02613},
year={2024},
archivePrefix={arXiv},
eprint={2410.02613},
primaryClass={cs.CV cs.AI cs.CL}
} | ventura2024nl-eye: |
arxiv-665127 | 2410.02615 | LoGra-Med: Long Context Multi-Graph Alignment for Medical Vision-Language Model | <|reference_start|>LoGra-Med: Long Context Multi-Graph Alignment for Medical Vision-Language Model: State-of-the-art medical multi-modal large language models (med-MLLM), like LLaVA-Med or BioMedGPT, leverage instruction-following data in pre-training. However, those models primarily focus on scaling the model size and data volume to boost performance while mainly relying on the autoregressive learning objectives. Surprisingly, we reveal that such learning schemes might result in a weak alignment between vision and language modalities, making these models highly reliant on extensive pre-training datasets - a significant challenge in medical domains due to the expensive and time-consuming nature of curating high-quality instruction-following instances. We address this with LoGra-Med, a new multi-graph alignment algorithm that enforces triplet correlations across image modalities, conversation-based descriptions, and extended captions. This helps the model capture contextual meaning, handle linguistic variability, and build cross-modal associations between visuals and text. To scale our approach, we designed an efficient end-to-end learning scheme using black-box gradient estimation, enabling faster LLaMa 7B training. Our results show LoGra-Med matches LLAVA-Med performance on 600K image-text pairs for Medical VQA and significantly outperforms it when trained on 10% of the data. For example, on VQA-RAD, we exceed LLAVA-Med by 20.13% and nearly match the 100% pre-training score (72.52% vs. 72.64%). We also surpass SOTA methods like BiomedGPT on visual chatbots and RadFM on zero-shot image classification with VQA, highlighting the effectiveness of multi-graph alignment.<|reference_end|> | arxiv | @article{nguyen2024logra-med:,
title={LoGra-Med: Long Context Multi-Graph Alignment for Medical
Vision-Language Model},
author={Duy M. H. Nguyen and Nghiem T. Diep and Trung Q. Nguyen and Hoang-Bao
Le and Tai Nguyen and Tien Nguyen and TrungTin Nguyen and Nhat Ho and Pengtao
Xie and Roger Wattenhofer and James Zhou and Daniel Sonntag and Mathias
Niepert},
journal={arXiv preprint arXiv:2410.02615},
year={2024},
archivePrefix={arXiv},
eprint={2410.02615},
primaryClass={cs.LG}
} | nguyen2024logra-med: |
arxiv-665128 | 2410.02618 | Achieving Fairness in Predictive Process Analytics via Adversarial Learning (Extended Version) | <|reference_start|>Achieving Fairness in Predictive Process Analytics via Adversarial Learning (Extended Version): Predictive business process analytics has become important for organizations, offering real-time operational support for their processes. However, these algorithms often make unfair predictions because they are based on biased variables (e.g., gender or nationality), namely variables embodying discrimination. This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics to ensure that predictions are not influenced by biased variables. Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value. The proposed technique is also compared with the state of the art in fairness in process mining, illustrating that our framework achieves a higher level of fairness while retaining better prediction quality.<|reference_end|> | arxiv | @article{de leoni2024achieving,
title={Achieving Fairness in Predictive Process Analytics via Adversarial
Learning (Extended Version)},
author={Massimiliano de Leoni and Alessandro Padella},
journal={arXiv preprint arXiv:2410.02618},
year={2024},
archivePrefix={arXiv},
eprint={2410.02618},
primaryClass={cs.AI cs.LG}
} | de leoni2024achieving |
arxiv-665129 | 2410.02619 | GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering | <|reference_start|>GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering: We present GI-GS, a novel inverse rendering framework that leverages 3D Gaussian Splatting (3DGS) and deferred shading to achieve photo-realistic novel view synthesis and relighting. In inverse rendering, accurately modeling the shading processes of objects is essential for achieving high-fidelity results. Therefore, it is critical to incorporate global illumination to account for indirect lighting that reaches an object after multiple bounces across the scene. Previous 3DGS-based methods have attempted to model indirect lighting by characterizing indirect illumination as learnable lighting volumes or additional attributes of each Gaussian, while using baked occlusion to represent shadow effects. These methods, however, fail to accurately model the complex physical interactions between light and objects, making it impossible to construct realistic indirect illumination during relighting. To address this limitation, we propose to calculate indirect lighting using efficient path tracing with deferred shading. In our framework, we first render a G-buffer to capture the detailed geometry and material properties of the scene. Then, we perform physically-based rendering (PBR) only for direct lighting. With the G-buffer and previous rendering results, the indirect lighting can be calculated through a lightweight path tracing. Our method effectively models indirect lighting under any given lighting conditions, thereby achieving better novel view synthesis and relighting. Quantitative and qualitative results show that our GI-GS outperforms existing baselines in both rendering quality and efficiency.<|reference_end|> | arxiv | @article{chen2024gi-gs:,
title={GI-GS: Global Illumination Decomposition on Gaussian Splatting for
Inverse Rendering},
author={Hongze Chen and Zehong Lin and Jun Zhang},
journal={arXiv preprint arXiv:2410.02619},
year={2024},
archivePrefix={arXiv},
eprint={2410.02619},
primaryClass={cs.CV}
} | chen2024gi-gs: |
arxiv-665130 | 2410.02622 | Diss-l-ECT: Dissecting Graph Data with local Euler Characteristic Transforms | <|reference_start|>Diss-l-ECT: Dissecting Graph Data with local Euler Characteristic Transforms: The Euler Characteristic Transform (ECT) is an efficiently-computable geometrical-topological invariant that characterizes the global shape of data. In this paper, we introduce the Local Euler Characteristic Transform ($\ell$-ECT), a novel extension of the ECT particularly designed to enhance expressivity and interpretability in graph representation learning. Unlike traditional Graph Neural Networks (GNNs), which may lose critical local details through aggregation, the $\ell$-ECT provides a lossless representation of local neighborhoods. This approach addresses key limitations in GNNs by preserving nuanced local structures while maintaining global interpretability. Moreover, we construct a rotation-invariant metric based on $\ell$-ECTs for spatial alignment of data spaces. Our method exhibits superior performance than standard GNNs on a variety of node classification tasks, particularly in graphs with high heterophily.<|reference_end|> | arxiv | @article{von rohrscheidt2024diss-l-ect:,
title={Diss-l-ECT: Dissecting Graph Data with local Euler Characteristic
Transforms},
author={Julius von Rohrscheidt and Bastian Rieck},
journal={arXiv preprint arXiv:2410.02622},
year={2024},
archivePrefix={arXiv},
eprint={2410.02622},
primaryClass={cs.LG math.AT}
} | von rohrscheidt2024diss-l-ect: |
arxiv-665131 | 2410.02623 | Ranking Perspective for Tree-based Methods with Applications to Symbolic Feature Selection | <|reference_start|>Ranking Perspective for Tree-based Methods with Applications to Symbolic Feature Selection: Tree-based methods are powerful nonparametric techniques in statistics and machine learning. However, their effectiveness, particularly in finite-sample settings, is not fully understood. Recent applications have revealed their surprising ability to distinguish transformations (which we call symbolic feature selection) that remain obscure under current theoretical understanding. This work provides a finite-sample analysis of tree-based methods from a ranking perspective. We link oracle partitions in tree methods to response rankings at local splits, offering new insights into their finite-sample behavior in regression and feature selection tasks. Building on this local ranking perspective, we extend our analysis in two ways: (i) We examine the global ranking performance of individual trees and ensembles, including Classification and Regression Trees (CART) and Bayesian Additive Regression Trees (BART), providing finite-sample oracle bounds, ranking consistency, and posterior contraction results. (ii) Inspired by the ranking perspective, we propose concordant divergence statistics $\mathcal{T}_0$ to evaluate symbolic feature mappings and establish their properties. Numerical experiments demonstrate the competitive performance of these statistics in symbolic feature selection tasks compared to existing methods.<|reference_end|> | arxiv | @article{luo2024ranking,
title={Ranking Perspective for Tree-based Methods with Applications to Symbolic
Feature Selection},
author={Hengrui Luo and Meng Li},
journal={arXiv preprint arXiv:2410.02623},
year={2024},
archivePrefix={arXiv},
eprint={2410.02623},
primaryClass={math.ST cs.NA math.NA stat.ML stat.TH}
} | luo2024ranking |
arxiv-665132 | 2410.02626 | Online Learning Guided Quasi-Newton Methods with Global Non-Asymptotic Convergence | <|reference_start|>Online Learning Guided Quasi-Newton Methods with Global Non-Asymptotic Convergence: In this paper, we propose a quasi-Newton method for solving smooth and monotone nonlinear equations, including unconstrained minimization and minimax optimization as special cases. For the strongly monotone setting, we establish two global convergence bounds: (i) a linear convergence rate that matches the rate of the celebrated extragradient method, and (ii) an explicit global superlinear convergence rate that provably surpasses the linear convergence rate after at most ${O}(d)$ iterations, where $d$ is the problem's dimension. In addition, for the case where the operator is only monotone, we prove a global convergence rate of ${O}(\min\{{1}/{k},{\sqrt{d}}/{k^{1.25}}\})$ in terms of the duality gap. This matches the rate of the extragradient method when $k = {O}(d^2)$ and is faster when $k = \Omega(d^2)$. These results are the first global convergence results to demonstrate a provable advantage of a quasi-Newton method over the extragradient method, without querying the Jacobian of the operator. Unlike classical quasi-Newton methods, we achieve this by using the hybrid proximal extragradient framework and a novel online learning approach for updating the Jacobian approximation matrices. Specifically, guided by the convergence analysis, we formulate the Jacobian approximation update as an online convex optimization problem over non-symmetric matrices, relating the regret of the online problem to the convergence rate of our method. To facilitate efficient implementation, we further develop a tailored online learning algorithm based on an approximate separation oracle, which preserves structures such as symmetry and sparsity in the Jacobian matrices.<|reference_end|> | arxiv | @article{jiang2024online,
title={Online Learning Guided Quasi-Newton Methods with Global Non-Asymptotic
Convergence},
author={Ruichen Jiang and Aryan Mokhtari},
journal={arXiv preprint arXiv:2410.02626},
year={2024},
archivePrefix={arXiv},
eprint={2410.02626},
primaryClass={math.OC cs.LG stat.ML}
} | jiang2024online |
arxiv-665133 | 2410.02627 | Preparing for Super-Reactivity: Early Fault-Detection in the Development of Exceedingly Complex Reactive Systems | <|reference_start|>Preparing for Super-Reactivity: Early Fault-Detection in the Development of Exceedingly Complex Reactive Systems: We introduce the term Super-Reactive Systems to refer to reactive systems whose construction and behavior are complex, constantly changing and evolving, and heavily interwoven with other systems and the physical world. Finding hidden faults in such systems early in planning and development is critical for human safety, the environment, society and the economy. However, the complexity of the system and its interactions and the absence of adequate technical details pose a great obstacle. We propose an architecture for models and tools to overcome such barriers and enable simulation, systematic analysis, and fault detection and handling, early in the development of super-reactive systems. The approach is facilitated by the inference and abstraction capabilities and the power and knowledge afforded by large language models and associated AI tools. It is based on: (i) deferred, just-in-time interpretation of model elements that are stored in natural language form, and (ii) early capture of tacit interdependencies among seemingly orthogonal requirements.<|reference_end|> | arxiv | @article{harel2024preparing,
title={Preparing for Super-Reactivity: Early Fault-Detection in the Development
of Exceedingly Complex Reactive Systems},
author={David Harel and Assaf Marron},
journal={arXiv preprint arXiv:2410.02627},
year={2024},
archivePrefix={arXiv},
eprint={2410.02627},
primaryClass={cs.SE}
} | harel2024preparing |
arxiv-665134 | 2410.02628 | Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization | <|reference_start|>Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization: Learning conditional distributions $\pi^*(\cdot|x)$ is a central problem in machine learning, which is typically approached via supervised methods with paired data $(x,y) \sim \pi^*$. However, acquiring paired data samples is often challenging, especially in problems such as domain translation. This necessitates the development of $\textit{semi-supervised}$ models that utilize both limited paired data and additional unpaired i.i.d. samples $x \sim \pi^*_x$ and $y \sim \pi^*_y$ from the marginal distributions. The usage of such combined data is complex and often relies on heuristic approaches. To tackle this issue, we propose a new learning paradigm that integrates both paired and unpaired data $\textbf{seamlessly}$ through the data likelihood maximization techniques. We demonstrate that our approach also connects intriguingly with inverse entropic optimal transport (OT). This finding allows us to apply recent advances in computational OT to establish a $\textbf{light}$ learning algorithm to get $\pi^*(\cdot|x)$. Furthermore, we demonstrate through empirical tests that our method effectively learns conditional distributions using paired and unpaired data simultaneously.<|reference_end|> | arxiv | @article{persiianov2024inverse,
title={Inverse Entropic Optimal Transport Solves Semi-supervised Learning via
Data Likelihood Maximization},
author={Mikhail Persiianov, Arip Asadulaev, Nikita Andreev, Nikita
Starodubcev, Dmitry Baranchuk, Anastasis Kratsios, Evgeny Burnaev and
Alexander Korotin},
journal={arXiv preprint arXiv:2410.02628},
year={2024},
archivePrefix={arXiv},
eprint={2410.02628},
primaryClass={cs.LG cs.AI}
} | persiianov2024inverse |
arxiv-665135 | 2410.02629 | Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression | <|reference_start|>Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression: This paper studies the generalization performance of iterates obtained by Gradient Descent (GD), Stochastic Gradient Descent (SGD) and their proximal variants in high-dimensional robust regression problems. The number of features is comparable to the sample size and errors may be heavy-tailed. We introduce estimators that precisely track the generalization error of the iterates along the trajectory of the iterative algorithm. These estimators are provably consistent under suitable conditions. The results are illustrated through several examples, including Huber regression, pseudo-Huber regression, and their penalized variants with non-smooth regularizer. We provide explicit generalization error estimates for iterates generated from GD and SGD, or from proximal SGD in the presence of a non-smooth regularizer. The proposed risk estimates serve as effective proxies for the actual generalization error, allowing us to determine the optimal stopping iteration that minimizes the generalization error. Extensive simulations confirm the effectiveness of the proposed generalization error estimates.<|reference_end|> | arxiv | @article{tan2024estimating,
title={Estimating Generalization Performance Along the Trajectory of Proximal
SGD in Robust Regression},
author={Kai Tan, Pierre C. Bellec},
journal={arXiv preprint arXiv:2410.02629},
year={2024},
archivePrefix={arXiv},
eprint={2410.02629},
primaryClass={math.ST cs.LG stat.ME stat.TH}
} | tan2024estimating |
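To make the robust-regression setting above concrete, here is a minimal sketch of plain gradient descent on the Huber loss for a one-parameter model. It is illustrative only: the paper's contribution is the risk estimators along the iterate trajectory, which are not implemented here, and the data and hyperparameters below are made up.

```python
# Minimal sketch (not the paper's risk estimator): gradient descent on the
# Huber loss for one-parameter robust regression y ~ w * x.

def huber_grad(r, delta=1.0):
    """Derivative of the Huber loss with respect to the residual r."""
    return r if abs(r) <= delta else delta * (1 if r > 0 else -1)

def gd_huber(xs, ys, steps=200, lr=0.05, delta=1.0):
    """Fit y ~ w * x by gradient descent on the Huber loss."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        g = sum(huber_grad(w * x - y, delta) * x for x, y in zip(xs, ys)) / n
        w -= lr * g
    return w

# Clean data y = 2x plus one heavy-tailed outlier; the Huber loss caps the
# outlier's gradient, so the fit stays close to the true slope 2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 50.0]   # last point is an outlier
w = gd_huber(xs, ys)
```

Under squared loss the outlier would drag the slope far above 2; here its gradient contribution is bounded by `delta * x`.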
arxiv-665136 | 2410.02630 | Metrics Revolutions: Groundbreaking Insights into the Implementation of Metrics for Biomedical Image Segmentation | <|reference_start|>Metrics Revolutions: Groundbreaking Insights into the Implementation of Metrics for Biomedical Image Segmentation: The evaluation of segmentation performance is a common task in biomedical image analysis, with its importance emphasized in the recently released metrics selection guidelines and computing frameworks. To quantitatively evaluate the alignment of two segmentations, researchers commonly resort to counting metrics, such as the Dice similarity coefficient, or distance-based metrics, such as the Hausdorff distance, which are usually computed by publicly available open-source tools with an inherent assumption that these tools provide consistent results. In this study we questioned this assumption, and performed a systematic implementation analysis along with quantitative experiments on real-world clinical data to compare 11 open-source tools for distance-based metrics computation against our highly accurate mesh-based reference implementation. The results revealed that statistically significant differences among all open-source tools are both surprising and concerning, since they question the validity of existing studies. Besides identifying the main sources of variation, we also provide recommendations for distance-based metrics computation.<|reference_end|> | arxiv | @article{podobnik2024metrics,
title={Metrics Revolutions: Groundbreaking Insights into the Implementation of
Metrics for Biomedical Image Segmentation},
  author={Ga\v{s}per Podobnik and Toma\v{z} Vrtovec},
journal={arXiv preprint arXiv:2410.02630},
year={2024},
archivePrefix={arXiv},
eprint={2410.02630},
primaryClass={cs.CV}
} | podobnik2024metrics |
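For reference, the Hausdorff distance discussed above can be computed naively as follows. This is a toy O(n·m) implementation on point sets, not one of the 11 audited open-source tools or the authors' mesh-based reference implementation.

```python
# Naive symmetric Hausdorff distance between two finite point sets,
# the kind of distance-based metric whose implementations the study compares.

import math

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (4.0, 0.0)]
d = hausdorff(A, B)  # farthest mismatch: (4,0) is 3 away from its nearest point in A
```

Note that the directed variants are asymmetric, one of the implementation details on which tools can silently disagree.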
arxiv-665137 | 2410.02631 | Large Language Model for Multi-Domain Translation: Benchmarking and Domain CoT Fine-tuning | <|reference_start|>Large Language Model for Multi-Domain Translation: Benchmarking and Domain CoT Fine-tuning: Achieving consistent high-quality machine translation (MT) across diverse domains remains a significant challenge, primarily due to the limited and imbalanced parallel training data available in various domains. While large language models (LLMs) have demonstrated impressive general understanding and generation abilities, their potential in multi-domain MT is under-explored. We establish a comprehensive benchmark for multi-domain translation, featuring 25 German$\Leftrightarrow$English and 22 Chinese$\Leftrightarrow$English test sets respectively covering 15 domains. Our evaluation of prominent LLMs reveals a discernible performance gap against traditional MT systems, highlighting domain overfitting and catastrophic forgetting issues after fine-tuning on domain-limited corpora. To mitigate this, we propose a domain Chain of Thought (CoT) fine-tuning technique that utilizes the intrinsic multi-domain intelligence of LLMs to improve translation performance. This method inspires the LLM to perceive domain information from the source text, which then serves as a helpful hint to guide the translation process. Despite being trained on a small dataset of four domains, our CoT fine-tuning approach achieves notable enhancements in translation accuracy and domain robustness over traditional fine-tuning, as evidenced by an average 1.53 BLEU score increase across more than 20 distinct German$\rightarrow$English out-of-domain tests.<|reference_end|> | arxiv | @article{hu2024large,
title={Large Language Model for Multi-Domain Translation: Benchmarking and
Domain CoT Fine-tuning},
author={Tianxiang Hu, Pei Zhang, Baosong Yang, Jun Xie, Derek F. Wong, Rui
Wang},
journal={arXiv preprint arXiv:2410.02631},
year={2024},
archivePrefix={arXiv},
eprint={2410.02631},
primaryClass={cs.CL}
} | hu2024large |
arxiv-665138 | 2410.02634 | When is local search both effective and efficient? | <|reference_start|>When is local search both effective and efficient?: Combinatorial optimization problems define fitness landscapes that combine the numerics of the 'fitness' function to be maximized with the combinatorics of which assignments are adjacent. Local search starts at an initial assignment in this landscape and successively moves to assignments until no further improvement is possible among the adjacent assignments. Classic analyses of local search algorithms have focused mostly on the question of effectiveness ("did the algorithm find a good solution?") and often implicitly assumed that there are no doubts about their efficiency ("did the algorithm find the solution quickly?"). But there are many reasons to doubt the efficiency of local search. Many local search algorithms are known to be inefficient even if we focus on fitness landscapes on the hypercube that are single peaked on every subcube (known as semismooth fitness landscapes, completely unimodal pseudo-Boolean functions, or acyclic unique sink orientations). Here, we want to identify the most expressive subclass of single-peaked binary Boolean valued constraint satisfaction problems for which many popular local search algorithms are efficient. In this paper, we introduce the class of conditionally-smooth fitness landscapes where the preferred assignment of a variable xj depends only on the assignments of variables xi with i less than j in an associated partial order. We prove that many popular local search algorithms like random ascent, simulated annealing, various jumping rules, and the Kernighan-Lin heuristic are very efficient on conditionally-smooth landscapes. Some other popular local search algorithms like steepest ascent and random facet, however, still require a super-polynomial number of steps on these landscapes. 
Our hope is to contribute to a fuller understanding of what properties fitness landscapes must have for local search algorithms to be both effective and efficient.<|reference_end|> | arxiv | @article{kaznatcheev2024when,
title={When is local search both effective and efficient?},
author={Artem Kaznatcheev and Sofia Vazquez Alferez},
journal={arXiv preprint arXiv:2410.02634},
year={2024},
archivePrefix={arXiv},
eprint={2410.02634},
primaryClass={cs.DS cs.DM q-bio.PE}
} | kaznatcheev2024when |
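The local-search procedures named above can be illustrated with a toy random-ascent routine on the Boolean hypercube: from the current assignment, move to a uniformly random adjacent assignment (single bit flip) that strictly improves fitness, until none exists. The fitness function and all details here are illustrative, not taken from the paper.

```python
# Illustrative random-ascent local search on the Boolean hypercube
# (not code from the paper).

import random

def random_ascent(fitness, x):
    x = list(x)
    steps = 0
    while True:
        better = [i for i in range(len(x))
                  if fitness(x[:i] + [1 - x[i]] + x[i+1:]) > fitness(x)]
        if not better:          # no improving neighbor: a local peak
            return x, steps
        i = random.choice(better)
        x[i] = 1 - x[i]
        steps += 1

# A smooth landscape: fitness is the number of ones, single peak at all-ones.
onemax = sum
peak, steps = random_ascent(onemax, [0] * 8)
```

On this smooth landscape every run reaches the peak in exactly n improving flips; the paper's question is which richer landscape classes keep such procedures this efficient.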
arxiv-665139 | 2410.02636 | Inapproximability of Sparsest Vector in a Real Subspace | <|reference_start|>Inapproximability of Sparsest Vector in a Real Subspace: We establish strong inapproximability for finding the sparsest nonzero vector in a real subspace. We show that it is NP-Hard (under randomized reductions) to approximate the sparsest vector in a subspace within any constant factor (or almost polynomial factors in quasipolynomial time). We recover as a corollary state of the art inapproximability for the shortest vector problem (SVP), a foundational problem in lattice based cryptography. Our proof is surprisingly simple, bypassing even the PCP theorem. We are inspired by the homogenization framework from the inapproximability theory of minimum distance problems (MDC) in integer lattices and error correcting codes. We use a combination of (a) \emph{product testing via tensor codes} and (b) \emph{encoding an assignment as a coset of a random code in higher dimensional space} in order to embed non-homogeneous quadratic equations into the sparsest vector problem. (a) is inspired by Austrin and Khot's simplified proof of hardness of MDC over finite fields, and (b) is inspired by Micciancio's semi-derandomization of hardness of SVP. Our reduction involves the challenge of performing (a) over the reals. We prove that tensoring of the kernel of a +1/-1 random matrix furnishes an adequate product test (while still allowing (b)). The proof exposes a connection to Littlewood-Offord theory and relies on a powerful anticoncentration result of Rudelson and Vershynin. Our main motivation in this work is the development of inapproximability theory for problems over the reals. Analytic variants of sparsest vector have connections to small set expansion, quantum separability and polynomial maximization over convex sets, all of which cause similar barriers to inapproximability. 
The approach we develop could lead to progress on the hardness of some of these problems.<|reference_end|> | arxiv | @article{bhattiprolu2024inapproximability,
title={Inapproximability of Sparsest Vector in a Real Subspace},
author={Vijay Bhattiprolu and Euiwoong Lee},
journal={arXiv preprint arXiv:2410.02636},
year={2024},
archivePrefix={arXiv},
eprint={2410.02636},
primaryClass={cs.CC cs.CR}
} | bhattiprolu2024inapproximability |
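To make the problem statement concrete: in toy dimensions, the sparsest nonzero vector in a subspace can be approximated by brute force over a small coefficient grid. This only illustrates the objective; the paper is about hardness, not an algorithm, and the basis below is made up.

```python
# Toy brute-force illustration of the sparsest-vector objective
# (feasible only in tiny dimensions; the paper proves this is hard in general).

def sparsity(v, eps=1e-9):
    """Number of coordinates that are not (numerically) zero."""
    return sum(1 for x in v if abs(x) > eps)

b1, b2 = (1.0, 1.0, 0.0), (0.0, 1.0, 1.0)     # basis of a 2-D subspace of R^3
grid = [i / 2 for i in range(-4, 5)]          # coefficients in {-2, ..., 2}

best = None
for a in grid:
    for b in grid:
        v = tuple(a * x + b * y for x, y in zip(b1, b2))
        if sparsity(v) > 0 and (best is None or sparsity(v) < sparsity(best)):
            best = v

# every nonzero vector a*b1 + b*b2 = (a, a+b, b) has at least 2 nonzero entries
```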
arxiv-665140 | 2410.02637 | Plots Unlock Time-Series Understanding in Multimodal Models | <|reference_start|>Plots Unlock Time-Series Understanding in Multimodal Models: While multimodal foundation models can now natively work with data beyond text, they remain underutilized in analyzing the considerable amounts of multi-dimensional time-series data in fields like healthcare, finance, and social sciences, representing a missed opportunity for richer, data-driven insights. This paper proposes a simple but effective method that leverages the existing vision encoders of these models to "see" time-series data via plots, avoiding the need for additional, potentially costly, model training. Our empirical evaluations show that this approach outperforms providing the raw time-series data as text, with the additional benefit that visual time-series representations demonstrate up to a 90% reduction in model API costs. We validate our hypothesis through synthetic data tasks of increasing complexity, progressing from simple functional form identification on clean data, to extracting trends from noisy scatter plots. To demonstrate generalizability from synthetic tasks with clear reasoning steps to more complex, real-world scenarios, we apply our approach to consumer health tasks - specifically fall detection, activity recognition, and readiness assessment - which involve heterogeneous, noisy data and multi-step reasoning. The overall success of plot performance over text performance (up to a 120% performance increase on zero-shot synthetic tasks, and up to a 150% performance increase on real-world tasks), across both GPT and Gemini model families, highlights our approach's potential for making the best use of the native capabilities of foundation models.<|reference_end|> | arxiv | @article{daswani2024plots,
title={Plots Unlock Time-Series Understanding in Multimodal Models},
author={Mayank Daswani, Mathias M.J. Bellaiche, Marc Wilson, Desislav Ivanov,
Mikhail Papkov, Eva Schnider, Jing Tang, Kay Lamerigts, Gabriela Botea,
Michael A. Sanchez, Yojan Patel, Shruthi Prabhakara, Shravya Shetty, Umesh
Telang},
journal={arXiv preprint arXiv:2410.02637},
year={2024},
archivePrefix={arXiv},
eprint={2410.02637},
primaryClass={cs.AI cs.CV}
} | daswani2024plots |
arxiv-665141 | 2410.02638 | Spatial-Temporal Multi-Cuts for Online Multiple-Camera Vehicle Tracking | <|reference_start|>Spatial-Temporal Multi-Cuts for Online Multiple-Camera Vehicle Tracking: Accurate online multiple-camera vehicle tracking is essential for intelligent transportation systems, autonomous driving, and smart city applications. Like single-camera multiple-object tracking, it is commonly formulated as a graph problem of tracking-by-detection. Within this framework, existing online methods usually consist of two-stage procedures that cluster temporally first, then spatially, or vice versa. This is computationally expensive and prone to error accumulation. We introduce a graph representation that allows spatial-temporal clustering in a single, combined step: New detections are spatially and temporally connected with existing clusters. By keeping sparse appearance and positional cues of all detections in a cluster, our method can compare clusters based on the strongest available evidence. The final tracks are obtained online using a simple multicut assignment procedure. Our method does not require any training on the target scene, pre-extraction of single-camera tracks, or additional annotations. Notably, we outperform the online state-of-the-art on the CityFlow dataset in terms of IDF1 by more than 14%, and on the Synthehicle dataset by more than 25%, respectively. The code is publicly available.<|reference_end|> | arxiv | @article{herzog2024spatial-temporal,
title={Spatial-Temporal Multi-Cuts for Online Multiple-Camera Vehicle Tracking},
author={Fabian Herzog, Johannes Gilg, Philipp Wolters, Torben Teepe, and
Gerhard Rigoll},
journal={arXiv preprint arXiv:2410.02638},
year={2024},
archivePrefix={arXiv},
eprint={2410.02638},
primaryClass={cs.CV}
} | herzog2024spatial-temporal |
arxiv-665142 | 2410.02639 | Labor Migration Modeling through Large-scale Job Query Data | <|reference_start|>Labor Migration Modeling through Large-scale Job Query Data: Accurate and timely modeling of labor migration is crucial for various urban governance and commercial tasks, such as local policy-making and business site selection. However, existing studies on labor migration largely rely on limited survey data with statistical methods, which fail to deliver timely and fine-grained insights for time-varying regional trends. To this end, we propose a deep learning-based spatial-temporal labor migration analysis framework, DHG-SIL, by leveraging large-scale job query data. Specifically, we first acquire labor migration intention as a proxy of labor migration via job queries from one of the world's largest search engines. Then, a Discrepant Homophily co-preserved Graph Convolutional Network (DH-GCN) and an interpretable temporal module are respectively proposed to capture cross-city and sequential labor migration dependencies. Besides, we introduce four interpretable variables to quantify city migration properties, which are co-optimized with city representations via tailor-designed contrastive losses. Extensive experiments on three real-world datasets demonstrate the superiority of our DHG-SIL. Notably, DHG-SIL has been deployed as a core component of a cooperative partner's intelligent human resource system, and the system supported a series of city talent attraction reports.<|reference_end|> | arxiv | @article{guo2024labor,
title={Labor Migration Modeling through Large-scale Job Query Data},
author={Zhuoning Guo, Le Zhang, Hengshu Zhu, Weijia Zhang, Hui Xiong, Hao Liu},
journal={arXiv preprint arXiv:2410.02639},
year={2024},
archivePrefix={arXiv},
eprint={2410.02639},
primaryClass={cs.LG}
} | guo2024labor |
arxiv-665143 | 2410.02640 | Diffusion-based Extreme Image Compression with Compressed Feature Initialization | <|reference_start|>Diffusion-based Extreme Image Compression with Compressed Feature Initialization: Diffusion-based extreme image compression methods have achieved impressive performance at extremely low bitrates. However, constrained by the iterative denoising process that starts from pure noise, these methods are limited in both fidelity and efficiency. To address these two issues, we present Relay Residual Diffusion Extreme Image Compression (RDEIC), which leverages compressed feature initialization and residual diffusion. Specifically, we first use the compressed latent features of the image with added noise, instead of pure noise, as the starting point to eliminate the unnecessary initial stages of the denoising process. Second, we design a novel relay residual diffusion that reconstructs the raw image by iteratively removing the added noise and the residual between the compressed and target latent features. Notably, our relay residual diffusion network seamlessly integrates pre-trained stable diffusion to leverage its robust generative capability for high-quality reconstruction. Third, we propose a fixed-step fine-tuning strategy to eliminate the discrepancy between the training and inference phases, further improving the reconstruction quality. Extensive experiments demonstrate that the proposed RDEIC achieves state-of-the-art visual quality and outperforms existing diffusion-based extreme image compression methods in both fidelity and efficiency. The source code will be provided in https://github.com/huai-chang/RDEIC.<|reference_end|> | arxiv | @article{li2024diffusion-based,
title={Diffusion-based Extreme Image Compression with Compressed Feature
Initialization},
author={Zhiyuan Li, Yanhui Zhou, Hao Wei, Chenyang Ge, Ajmal Mian},
journal={arXiv preprint arXiv:2410.02640},
year={2024},
archivePrefix={arXiv},
eprint={2410.02640},
primaryClass={eess.IV cs.CV}
} | li2024diffusion-based |
arxiv-665144 | 2410.02642 | Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers | <|reference_start|>Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers: Information retrieval (IR) systems have played a vital role in modern digital life and have cemented their continued usefulness in this new era of generative AI via retrieval-augmented generation. With strong language processing capabilities and remarkable versatility, large language models (LLMs) have become popular choices for zero-shot re-ranking in IR systems. So far, LLM-based re-ranking methods rely on strong generative capabilities, which restricts their use to either specialized or powerful proprietary models. Given these restrictions, we ask: is autoregressive generation necessary and optimal for LLMs to perform re-ranking? We hypothesize that there are abundant signals relevant to re-ranking within LLMs that might not be used to their full potential via generation. To more directly leverage such signals, we propose in-context re-ranking (ICR), a novel method that leverages the change in attention pattern caused by the search query for accurate and efficient re-ranking. To mitigate the intrinsic biases in LLMs, we propose a calibration method using a content-free query. Due to the absence of generation, ICR only requires two ($O(1)$) forward passes to re-rank $N$ documents, making it substantially more efficient than generative re-ranking methods that require at least $O(N)$ forward passes. Our novel design also enables ICR to be applied to any LLM without specialized training while guaranteeing a well-formed ranking. Extensive experiments with two popular open-weight LLMs on standard single-hop and multi-hop information retrieval benchmarks show that ICR outperforms RankGPT while cutting the latency by more than 60% in practice. Through detailed analyses, we show that ICR's performance is especially strong on tasks that require more complex re-ranking signals.
Our findings call for further exploration on novel ways of utilizing open-weight LLMs beyond text generation.<|reference_end|> | arxiv | @article{chen2024attention,
title={Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers},
  author={Shijie Chen, Bernal Jim\'enez Guti\'errez, Yu Su},
journal={arXiv preprint arXiv:2410.02642},
year={2024},
archivePrefix={arXiv},
eprint={2410.02642},
primaryClass={cs.CL cs.IR}
} | chen2024attention |
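The calibration idea sketched in the abstract above (score from the real query's attention minus the content-free query's attention) can be illustrated with toy numbers. This is not the ICR implementation; the per-document attention values and the aggregation are hypothetical.

```python
# Toy sketch of content-free-query calibration for attention-based re-ranking
# (illustrative numbers, not the actual ICR method).

def rank_docs(attn_query, attn_contentfree):
    """Rank documents by calibrated attention mass, highest first."""
    scores = [q - c for q, c in zip(attn_query, attn_contentfree)]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Hypothetical attention mass each document receives: doc 1 looks strongest
# under the real query, but most of that mass is an intrinsic bias that also
# appears for the content-free query; calibration promotes doc 2 instead.
attn_query       = [0.2, 0.5, 0.3]
attn_contentfree = [0.1, 0.45, 0.05]
order = rank_docs(attn_query, attn_contentfree)
```

Without calibration (content-free scores all zero), the biased document 1 would rank first.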
arxiv-665145 | 2410.02643 | Why Sample Space Matters: Keyframe Sampling Optimization for LiDAR-based Place Recognition | <|reference_start|>Why Sample Space Matters: Keyframe Sampling Optimization for LiDAR-based Place Recognition: Recent advances in robotics are pushing real-world autonomy, enabling robots to perform long-term and large-scale missions. A crucial component for successful missions is the incorporation of loop closures through place recognition, which effectively mitigates accumulated pose estimation drift. Despite computational advancements, optimizing performance for real-time deployment remains challenging, especially in resource-constrained mobile robots and multi-robot systems, since conventional keyframe sampling practices in place recognition often result in retaining redundant information or overlooking relevant data, as they rely on fixed sampling intervals or work directly in the 3D space instead of the feature space. To address these concerns, we introduce the concept of sample space in place recognition and demonstrate how different sampling techniques affect the query process and overall performance. We then present a novel keyframe sampling approach for LiDAR-based place recognition, which focuses on redundancy minimization and information preservation in the hyper-dimensional descriptor space. This approach is applicable to both learning-based and handcrafted descriptors, and through experimental validation across multiple datasets and descriptor frameworks, we demonstrate the effectiveness of our proposed method, showing it can jointly minimize redundancy and preserve essential information in real time. The proposed approach maintains robust performance across various datasets without requiring parameter tuning, contributing to more efficient and reliable place recognition for a wide range of robotic applications.<|reference_end|> | arxiv | @article{stathoulopoulos2024why,
title={Why Sample Space Matters: Keyframe Sampling Optimization for LiDAR-based
Place Recognition},
author={Nikolaos Stathoulopoulos, Vidya Sumathy, Christoforos Kanellakis and
George Nikolakopoulos},
journal={arXiv preprint arXiv:2410.02643},
year={2024},
archivePrefix={arXiv},
eprint={2410.02643},
primaryClass={cs.RO cs.CV}
} | stathoulopoulos2024why |
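One common redundancy-minimizing baseline in descriptor space is greedy farthest-point sampling: repeatedly keep the descriptor farthest from everything already kept. The sketch below is in the spirit of the sample-space idea above but is not the paper's exact method; the descriptors are made up.

```python
# Hedged sketch: greedy farthest-point keyframe sampling in descriptor space
# (a generic redundancy-minimizing baseline, not the paper's algorithm).

import math

def farthest_point_sample(descs, k):
    chosen = [0]                      # seed with the first descriptor
    while len(chosen) < k:
        # keep the descriptor farthest from everything chosen so far
        best = max((i for i in range(len(descs)) if i not in chosen),
                   key=lambda i: min(math.dist(descs[i], descs[j])
                                     for j in chosen))
        chosen.append(best)
    return chosen

# Four 2-D "descriptors": indices 0 and 1 are near-duplicates (a redundant
# keyframe pair), while 2 and 3 carry distinct information.
descs = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (0.0, 5.0)]
keep = farthest_point_sample(descs, 3)
```

The near-duplicate descriptor is the one dropped, which is the redundancy-minimization behavior the abstract argues fixed-interval sampling lacks.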
arxiv-665146 | 2410.02644 | Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents | <|reference_start|>Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents: Although LLM-based agents, powered by Large Language Models (LLMs), can use external tools and memory mechanisms to solve complex real-world tasks, they may also introduce critical security vulnerabilities. However, the existing literature does not comprehensively evaluate attacks and defenses against LLM-based agents. To address this, we introduce Agent Security Bench (ASB), a comprehensive framework designed to formalize, benchmark, and evaluate the attacks and defenses of LLM-based agents, including 10 scenarios (e.g., e-commerce, autonomous driving, finance), 10 agents targeting the scenarios, over 400 tools, 23 different types of attack/defense methods, and 8 evaluation metrics. Based on ASB, we benchmark 10 prompt injection attacks, a memory poisoning attack, a novel Plan-of-Thought backdoor attack, a mixed attack, and 10 corresponding defenses across 13 LLM backbones with nearly 90,000 testing cases in total. Our benchmark results reveal critical vulnerabilities in different stages of agent operation, including system prompt, user prompt handling, tool usage, and memory retrieval, with the highest average attack success rate of 84.30\%, but limited effectiveness shown in current defenses, unveiling important work to be done on agent security for the community. Our code can be found at https://github.com/agiresearch/ASB.<|reference_end|> | arxiv | @article{zhang2024agent,
title={Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and
Defenses in LLM-based Agents},
author={Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang,
Chenlu Zhan, Hongwei Wang, Yongfeng Zhang},
journal={arXiv preprint arXiv:2410.02644},
year={2024},
archivePrefix={arXiv},
eprint={2410.02644},
primaryClass={cs.CR cs.AI}
} | zhang2024agent |
arxiv-665147 | 2410.02646 | Learning 3D Perception from Others' Predictions | <|reference_start|>Learning 3D Perception from Others' Predictions: Accurate 3D object detection in real-world environments requires a huge amount of annotated data with high quality. Acquiring such data is tedious and expensive, and often needs repeated effort when a new sensor is adopted or when the detector is deployed in a new environment. We investigate a new scenario to construct 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector. For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area. This setting is label-efficient, sensor-agnostic, and communication-efficient: nearby units only need to share the predictions with the ego agent (e.g., car). Naively using the received predictions as ground-truths to train the detector for the ego car, however, leads to inferior performance. We systematically study the problem and identify viewpoint mismatches and mislocalization (due to synchronization and GPS errors) as the main causes, which unavoidably result in false positives, false negatives, and inaccurate pseudo labels. We propose a distance-based curriculum, first learning from closer units with similar viewpoints and subsequently improving the quality of other units' predictions via self-training. We further demonstrate that an effective pseudo label refinement module can be trained with a handful of annotated data, largely reducing the data quantity necessary to train an object detector. We validate our approach on the recently released real-world collaborative driving dataset, using reference cars' predictions as pseudo labels for the ego car. 
Extensive experiments including several scenarios (e.g., different sensors, detectors, and domains) demonstrate the effectiveness of our approach toward label-efficient learning of 3D perception from other units' predictions.<|reference_end|> | arxiv | @article{yoo2024learning,
title={Learning 3D Perception from Others' Predictions},
author={Jinsu Yoo, Zhenyang Feng, Tai-Yu Pan, Yihong Sun, Cheng Perng Phoo,
Xiangyu Chen, Mark Campbell, Kilian Q. Weinberger, Bharath Hariharan, Wei-Lun
Chao},
journal={arXiv preprint arXiv:2410.02646},
year={2024},
archivePrefix={arXiv},
eprint={2410.02646},
primaryClass={cs.CV}
} | yoo2024learning |
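The distance-based curriculum described above can be sketched as a staged filter over received pseudo-labels, ordered by sender distance to the ego vehicle. The thresholds and data below are hypothetical, purely to illustrate the idea of training first on closer (viewpoint-similar) units.

```python
# Illustrative sketch of a distance-based curriculum (not the paper's code):
# earlier stages use only pseudo-labels from nearby units, later stages
# progressively admit farther, less viewpoint-aligned ones.

def curriculum_batches(units, thresholds):
    """units: list of (distance_to_ego, pseudo_labels); yield cumulative stages."""
    for t in thresholds:
        yield [labels for d, labels in units if d <= t]

units = [(5.0, "carA"), (40.0, "carB"), (120.0, "carC")]
stages = list(curriculum_batches(units, [10.0, 50.0, 150.0]))
```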
arxiv-665148 | 2410.02647 | Immunogenicity Prediction with Dual Attention Enables Vaccine Target Selection | <|reference_start|>Immunogenicity Prediction with Dual Attention Enables Vaccine Target Selection: Immunogenicity prediction is a central topic in reverse vaccinology for finding candidate vaccines that can trigger protective immune responses. Existing approaches typically rely on highly compressed features and simple model architectures, leading to limited prediction accuracy and poor generalizability. To address these challenges, we introduce ProVaccine, a novel deep learning solution with a dual attention mechanism that integrates pre-trained latent vector representations of protein sequences and structures. We also compile the most comprehensive immunogenicity dataset to date, encompassing over 9,500 antigen sequences, structures, and immunogenicity labels from bacteria, viruses, and tumors. Extensive experiments demonstrate that ProVaccine outperforms existing methods across a wide range of evaluation metrics. Furthermore, we establish a post-hoc validation protocol to assess the practical significance of deep learning models in tackling vaccine design challenges. Our work provides an effective tool for vaccine design and sets valuable benchmarks for future research.<|reference_end|> | arxiv | @article{li2024immunogenicity,
title={Immunogenicity Prediction with Dual Attention Enables Vaccine Target
Selection},
author={Song Li, Yang Tan, Song Ke, Liang Hong, Bingxin Zhou},
journal={arXiv preprint arXiv:2410.02647},
year={2024},
archivePrefix={arXiv},
eprint={2410.02647},
primaryClass={cs.LG cs.CL q-bio.BM}
} | li2024immunogenicity |
arxiv-665149 | 2410.02650 | Undesirable Memorization in Large Language Models: A Survey | <|reference_start|>Undesirable Memorization in Large Language Models: A Survey: While recent research increasingly showcases the remarkable capabilities of Large Language Models (LLMs), it's vital to confront their hidden pitfalls. Among these challenges, the issue of memorization stands out, posing significant ethical and legal risks. In this paper, we present a Systematization of Knowledge (SoK) on the topic of memorization in LLMs. Memorization is the tendency of a model to store and reproduce phrases or passages from its training data, and it has been shown to be fundamental to various privacy and security attacks against LLMs. We begin by providing an overview of the literature on memorization, exploring it across five key dimensions: intentionality, degree, retrievability, abstraction, and transparency. Next, we discuss the metrics and methods used to measure memorization, followed by an analysis of the factors that contribute to the memorization phenomenon. We then examine how memorization manifests itself in specific model architectures and explore strategies for mitigating these effects. We conclude our overview by identifying potential research topics for the near future: developing methods for balancing performance and privacy in LLMs, and analyzing memorization in specific contexts, including conversational agents, retrieval-augmented generation, multilingual language models, and diffusion language models.<|reference_end|> | arxiv | @article{satvaty2024undesirable,
title={Undesirable Memorization in Large Language Models: A Survey},
author={Ali Satvaty, Suzan Verberne, Fatih Turkmen},
journal={arXiv preprint arXiv:2410.02650},
year={2024},
archivePrefix={arXiv},
eprint={2410.02650},
primaryClass={cs.CL cs.AI}
} | satvaty2024undesirable |
arxiv-665150 | 2410.02651 | CAX: Cellular Automata Accelerated in JAX | <|reference_start|>CAX: Cellular Automata Accelerated in JAX: Cellular automata have become a cornerstone for investigating emergence and self-organization across diverse scientific disciplines, spanning neuroscience, artificial life, and theoretical physics. However, the absence of a hardware-accelerated cellular automata library limits the exploration of new research directions, hinders collaboration, and impedes reproducibility. In this work, we introduce CAX (Cellular Automata Accelerated in JAX), a high-performance and flexible open-source library designed to accelerate cellular automata research. CAX offers cutting-edge performance and a modular design through a user-friendly interface, and can support both discrete and continuous cellular automata with any number of dimensions. We demonstrate CAX's performance and flexibility through a wide range of benchmarks and applications. From classic models like elementary cellular automata and Conway's Game of Life to advanced applications such as growing neural cellular automata and self-classifying MNIST digits, CAX runs simulations up to 2,000 times faster. Furthermore, we demonstrate CAX's potential to accelerate research by presenting a collection of three novel cellular automata experiments, each implemented in just a few lines of code thanks to the library's modular architecture. Notably, we show that a simple one-dimensional cellular automaton can outperform GPT-4 on the 1D-ARC challenge.<|reference_end|> | arxiv | @article{faldor2024cax:,
title={CAX: Cellular Automata Accelerated in JAX},
author={Maxence Faldor and Antoine Cully},
journal={arXiv preprint arXiv:2410.02651},
year={2024},
archivePrefix={arXiv},
eprint={2410.02651},
primaryClass={cs.LG cs.AI}
} | faldor2024cax: |
arxiv-665151 | 2410.02653 | Measuring and Improving Persuasiveness of Large Language Models | <|reference_start|>Measuring and Improving Persuasiveness of Large Language Models: LLMs are increasingly being used in workflows involving generating content to be consumed by humans (e.g., marketing) and also in directly interacting with humans (e.g., through chatbots). The development of such systems that are capable of generating verifiably persuasive messages presents both opportunities and challenges for society. On the one hand, such systems could positively impact domains like advertising and social good, such as addressing drug addiction, and on the other, they could be misused for spreading misinformation and shaping political opinions. To channel LLMs' impact on society, we need to develop systems to measure and benchmark their persuasiveness. With this motivation, we introduce PersuasionBench and PersuasionArena, the first large-scale benchmark and arena containing a battery of tasks to measure the persuasion ability of generative models automatically. We investigate to what extent LLMs know and leverage linguistic patterns that can help them generate more persuasive language. Our findings indicate that the persuasiveness of LLMs correlates positively with model size, but smaller models can also be made to have a higher persuasiveness than much larger models. Notably, targeted training using synthetic and natural datasets significantly enhances smaller models' persuasive capabilities, challenging scale-dependent assumptions. Our findings carry key implications for both model developers and policymakers. For instance, while the EU AI Act and California's SB-1047 aim to regulate AI models based on the number of floating point operations, we demonstrate that simple metrics like this alone fail to capture the full scope of AI's societal impact. 
We invite the community to explore and contribute to PersuasionArena and PersuasionBench, available at https://bit.ly/measure-persuasion, to advance our understanding of AI-driven persuasion and its societal implications.<|reference_end|> | arxiv | @article{singh2024measuring,
title={Measuring and Improving Persuasiveness of Large Language Models},
author={Somesh Singh and Yaman K Singla and Harini SI and Balaji Krishnamurthy},
journal={arXiv preprint arXiv:2410.02653},
year={2024},
archivePrefix={arXiv},
eprint={2410.02653},
primaryClass={cs.CL cs.CV}
} | singh2024measuring |
arxiv-665152 | 2410.02654 | Deconstructing Recurrence, Attention, and Gating: Investigating the transferability of Transformers and Gated Recurrent Neural Networks in forecasting of dynamical systems | <|reference_start|>Deconstructing Recurrence, Attention, and Gating: Investigating the transferability of Transformers and Gated Recurrent Neural Networks in forecasting of dynamical systems: Machine learning architectures, including transformers and recurrent neural networks (RNNs), have revolutionized forecasting in applications ranging from text processing to extreme weather. Notably, advanced network architectures, tuned for applications such as natural language processing, are transferable to other tasks such as spatiotemporal forecasting tasks. However, there is a scarcity of ablation studies to illustrate the key components that enable this forecasting accuracy. The absence of such studies, although explainable due to the associated computational cost, intensifies the belief that these models ought to be considered as black boxes. In this work, we decompose the key architectural components of the most powerful neural architectures, namely gating and recurrence in RNNs, and attention mechanisms in transformers. Then, we synthesize and build novel hybrid architectures from the standard blocks, performing ablation studies to identify which mechanisms are effective for each task. The importance of considering these components as hyper-parameters that can augment the standard architectures is exhibited on various forecasting datasets, from the spatiotemporal chaotic dynamics of the multiscale Lorenz 96 system, the Kuramoto-Sivashinsky equation, as well as standard real world time-series benchmarks. A key finding is that neural gating and attention improve the performance of all standard RNNs in most tasks, while the addition of a notion of recurrence in transformers is detrimental.
Furthermore, our study reveals that a novel, sparsely used, architecture which integrates Recurrent Highway Networks with neural gating and attention mechanisms, emerges as the best performing architecture in high-dimensional spatiotemporal forecasting of dynamical systems.<|reference_end|> | arxiv | @article{heidenreich2024deconstructing,
title={Deconstructing Recurrence, Attention, and Gating: Investigating the
transferability of Transformers and Gated Recurrent Neural Networks in
forecasting of dynamical systems},
author={Hunter S. Heidenreich and Pantelis R. Vlachas and Petros Koumoutsakos},
journal={arXiv preprint arXiv:2410.02654},
year={2024},
archivePrefix={arXiv},
eprint={2410.02654},
primaryClass={cs.LG nlin.CD physics.comp-ph}
} | heidenreich2024deconstructing |
arxiv-665153 | 2410.02656 | Scalable Simulation-free Entropic Unbalanced Optimal Transport | <|reference_start|>Scalable Simulation-free Entropic Unbalanced Optimal Transport: The Optimal Transport (OT) problem investigates a transport map that connects two distributions while minimizing a given cost function. Finding such a transport map has diverse applications in machine learning, such as generative modeling and image-to-image translation. In this paper, we introduce a scalable and simulation-free approach for solving the Entropic Unbalanced Optimal Transport (EUOT) problem. We derive the dynamical form of this EUOT problem, which is a generalization of the Schr\"odinger bridges (SB) problem. Based on this, we derive dual formulation and optimality conditions of the EUOT problem from the stochastic optimal control interpretation. By leveraging these properties, we propose a simulation-free algorithm to solve EUOT, called Simulation-free EUOT (SF-EUOT). While existing SB models require expensive simulation costs during training and evaluation, our model achieves simulation-free training and one-step generation by utilizing the reciprocal property. Our model demonstrates significantly improved scalability in generative modeling and image-to-image translation tasks compared to previous SB methods.<|reference_end|> | arxiv | @article{choi2024scalable,
title={Scalable Simulation-free Entropic Unbalanced Optimal Transport},
author={Jaemoo Choi and Jaewoong Choi},
journal={arXiv preprint arXiv:2410.02656},
year={2024},
archivePrefix={arXiv},
eprint={2410.02656},
primaryClass={cs.LG cs.AI}
} | choi2024scalable |
arxiv-665154 | 2410.02657 | Hate Personified: Investigating the role of LLMs in content moderation | <|reference_start|>Hate Personified: Investigating the role of LLMs in content moderation: For subjective tasks such as hate detection, where people perceive hate differently, the Large Language Model's (LLM) ability to represent diverse groups is unclear. By including additional context in prompts, we comprehensively analyze LLM's sensitivity to geographical priming, persona attributes, and numerical information to assess how well the needs of various groups are reflected. Our findings on two LLMs, five languages, and six datasets reveal that mimicking persona-based attributes leads to annotation variability. Meanwhile, incorporating geographical signals leads to better regional alignment. We also find that the LLMs are sensitive to numerical anchors, indicating the ability to leverage community-based flagging efforts and exposure to adversaries. Our work provides preliminary guidelines and highlights the nuances of applying LLMs in culturally sensitive cases.<|reference_end|> | arxiv | @article{masud2024hate,
title={Hate Personified: Investigating the role of LLMs in content moderation},
author={Sarah Masud and Sahajpreet Singh and Viktor Hangya and Alexander
Fraser and Tanmoy Chakraborty},
journal={arXiv preprint arXiv:2410.02657},
year={2024},
archivePrefix={arXiv},
eprint={2410.02657},
primaryClass={cs.CL cs.CY}
} | masud2024hate |
arxiv-665155 | 2410.02659 | Ion-Acoustic Wave Dynamics in a Two-Fluid Plasma | <|reference_start|>Ion-Acoustic Wave Dynamics in a Two-Fluid Plasma: Plasma is a medium containing free electrons and cations, where each particle group behaves as a conducting fluid with a single velocity and temperature in the presence of electromagnetic fields. The difference in the roles electrons and ions play defines the two-fluid description of plasma. This paper examines ion-acoustic waves generated by the particles in both hot and cold plasma using a collisionless "Euler-Poisson" (EP) system. Employing phase-space asymptotic analysis, we establish that for specific wave speeds, EP acquires homoclinic orbits at the steady-state equilibrium and, consequently, traveling waves. Combining Python and Wolfram Mathematica, we captured visualizations of such behavior in one spatial dimension.<|reference_end|> | arxiv | @article{kelting2024ion-acoustic,
title={Ion-Acoustic Wave Dynamics in a Two-Fluid Plasma},
author={Emily Kelting and J. Douglas Wright},
journal={arXiv preprint arXiv:2410.02659},
year={2024},
archivePrefix={arXiv},
eprint={2410.02659},
primaryClass={physics.plasm-ph cs.NA math-ph math.DS math.MP math.NA}
} | kelting2024ion-acoustic |
arxiv-665156 | 2410.02660 | How to Train Long-Context Language Models (Effectively) | <|reference_start|>How to Train Long-Context Language Models (Effectively): We study continued training and supervised fine-tuning (SFT) of a language model (LM) to make effective use of long-context information. We first establish a reliable evaluation protocol to guide model development -- instead of perplexity or simple needle-in-a-haystack (NIAH) tests, we use a broad set of long-context tasks, and we evaluate models after SFT with instruction data as this better reveals long-context abilities. Supported by our robust evaluations, we run thorough experiments to decide the data mix for continued pre-training, the instruction tuning dataset, and many other design choices. We find that (1) code repositories and books are excellent sources of long data, but it is crucial to combine them with high-quality short data; (2) training with a sequence length beyond the evaluation length boosts long-context performance; (3) for SFT, using only short instruction datasets yields strong performance on long-context tasks. Our final model, ProLong-8B, which is initialized from Llama-3 and trained on 40B tokens, demonstrates state-of-the-art long-context performance among similarly sized models at a length of 128K. ProLong outperforms Llama-3.1-8B-Instruct on the majority of long-context tasks despite having seen only 5% as many tokens during long-context training. Additionally, ProLong can effectively process up to 512K tokens, one of the longest context windows of publicly available LMs.<|reference_end|> | arxiv | @article{gao2024how,
title={How to Train Long-Context Language Models (Effectively)},
author={Tianyu Gao and Alexander Wettig and Howard Yen and Danqi Chen},
journal={arXiv preprint arXiv:2410.02660},
year={2024},
archivePrefix={arXiv},
eprint={2410.02660},
primaryClass={cs.CL cs.LG}
} | gao2024how |
arxiv-665157 | 2410.02664 | Grounded Answers for Multi-agent Decision-making Problem through Generative World Model | <|reference_start|>Grounded Answers for Multi-agent Decision-making Problem through Generative World Model: Recent progress in generative models has stimulated significant innovations in many fields, such as image generation and chatbots. Despite their success, these models often produce sketchy and misleading solutions for complex multi-agent decision-making problems because they miss the trial-and-error experience and reasoning as humans. To address this limitation, we explore a paradigm that integrates a language-guided simulator into the multi-agent reinforcement learning pipeline to enhance the generated answer. The simulator is a world model that separately learns dynamics and reward, where the dynamics model comprises an image tokenizer as well as a causal transformer to generate interaction transitions autoregressively, and the reward model is a bidirectional transformer learned by maximizing the likelihood of trajectories in the expert demonstrations under language guidance. Given an image of the current state and the task description, we use the world model to train the joint policy and produce the image sequence as the answer by running the converged policy on the dynamics model. The empirical results demonstrate that this framework can improve the answers for multi-agent decision-making problems by showing superior performance on the training and unseen tasks of the StarCraft Multi-Agent Challenge benchmark. In particular, it can generate consistent interaction sequences and explainable reward functions at interaction states, opening the path for training generative models of the future.<|reference_end|> | arxiv | @article{liu2024grounded,
title={Grounded Answers for Multi-agent Decision-making Problem through
Generative World Model},
author={Zeyang Liu and Xinrui Yang and Shiguang Sun and Long Qian and Lipeng
Wan and Xingyu Chen and Xuguang Lan},
journal={arXiv preprint arXiv:2410.02664},
year={2024},
archivePrefix={arXiv},
eprint={2410.02664},
primaryClass={cs.AI cs.MA}
} | liu2024grounded |
arxiv-665158 | 2410.02666 | AlphaIntegrator: Transformer Action Search for Symbolic Integration Proofs | <|reference_start|>AlphaIntegrator: Transformer Action Search for Symbolic Integration Proofs: We present the first correct-by-construction learning-based system for step-by-step mathematical integration. The key idea is to learn a policy, represented by a GPT transformer model, which guides the search for the right mathematical integration rule, to be carried out by a symbolic solver. Concretely, we introduce a symbolic engine with axiomatically correct actions on mathematical expressions, as well as the first dataset for step-by-step integration. Our GPT-style transformer model, trained on this synthetic data, demonstrates strong generalization by surpassing its own data generator in accuracy and efficiency, using 50% fewer search steps. Our experimental results with SoTA LLMs also demonstrate that the standard approach of fine-tuning LLMs on a set of question-answer pairs is insufficient for solving this mathematical task. This motivates the importance of discovering creative methods for combining LLMs with symbolic reasoning engines, of which our work is an instance.<|reference_end|> | arxiv | @article{ünsal2024alphaintegrator:,
title={AlphaIntegrator: Transformer Action Search for Symbolic Integration
Proofs},
author={Mert {\"U}nsal and Timon Gehr and Martin Vechev},
journal={arXiv preprint arXiv:2410.02666},
year={2024},
archivePrefix={arXiv},
eprint={2410.02666},
primaryClass={cs.LG cs.AI cs.SC}
} | ünsal2024alphaintegrator: |
arxiv-665159 | 2410.02667 | GUD: Generation with Unified Diffusion | <|reference_start|>GUD: Generation with Unified Diffusion: Diffusion generative models transform noise into data by inverting a process that progressively adds noise to data samples. Inspired by concepts from the renormalization group in physics, which analyzes systems across different scales, we revisit diffusion models by exploring three key design aspects: 1) the choice of representation in which the diffusion process operates (e.g. pixel-, PCA-, Fourier-, or wavelet-basis), 2) the prior distribution that data is transformed into during diffusion (e.g. Gaussian with covariance $\Sigma$), and 3) the scheduling of noise levels applied separately to different parts of the data, captured by a component-wise noise schedule. Incorporating the flexibility in these choices, we develop a unified framework for diffusion generative models with greatly enhanced design freedom. In particular, we introduce soft-conditioning models that smoothly interpolate between standard diffusion models and autoregressive models (in any basis), conceptually bridging these two approaches. Our framework opens up a wide design space which may lead to more efficient training and data generation, and paves the way to novel architectures integrating different generative approaches and generation tasks.<|reference_end|> | arxiv | @article{gerdes2024gud:,
title={GUD: Generation with Unified Diffusion},
author={Mathis Gerdes and Max Welling and Miranda C. N. Cheng},
journal={arXiv preprint arXiv:2410.02667},
year={2024},
archivePrefix={arXiv},
eprint={2410.02667},
primaryClass={cs.LG hep-th stat.ML}
} | gerdes2024gud: |
arxiv-665160 | 2410.02671 | Unsupervised Point Cloud Completion through Unbalanced Optimal Transport | <|reference_start|>Unsupervised Point Cloud Completion through Unbalanced Optimal Transport: Unpaired point cloud completion explores methods for learning a completion map from unpaired incomplete and complete point cloud data. In this paper, we propose a novel approach for unpaired point cloud completion using the unbalanced optimal transport map, called Unbalanced Optimal Transport Map for Unpaired Point Cloud Completion (UOT-UPC). We demonstrate that the unpaired point cloud completion can be naturally interpreted as the Optimal Transport (OT) problem and introduce the Unbalanced Optimal Transport (UOT) approach to address the class imbalance problem, which is prevalent in unpaired point cloud completion datasets. Moreover, we analyze the appropriate cost function for unpaired completion tasks. This analysis shows that the InfoCD cost function is particularly well-suited for this task. Our model is the first attempt to leverage UOT for unpaired point cloud completion, achieving competitive or superior results on both single-category and multi-category datasets. In particular, our model is especially effective in scenarios with class imbalance, where the proportions of categories are different between the incomplete and complete point cloud datasets.<|reference_end|> | arxiv | @article{lee2024unsupervised,
title={Unsupervised Point Cloud Completion through Unbalanced Optimal Transport},
author={Taekyung Lee and Jaemoo Choi and Jaewoong Choi and Myungjoo Kang},
journal={arXiv preprint arXiv:2410.02671},
year={2024},
archivePrefix={arXiv},
eprint={2410.02671},
primaryClass={cs.CV cs.AI}
} | lee2024unsupervised |
arxiv-665161 | 2410.02673 | A Priori Error Bounds for the Approximate Deconvolution Leray Reduced Order Model | <|reference_start|>A Priori Error Bounds for the Approximate Deconvolution Leray Reduced Order Model: The approximate deconvolution Leray reduced order model (ADL-ROM) uses spatial filtering to increase the ROM stability, and approximate deconvolution to increase the ROM accuracy. In the under-resolved numerical simulation of convection-dominated flows, ADL-ROM was shown to be significantly more stable than the standard ROM, and more accurate than the Leray ROM. In this paper, we prove a priori error bounds for the approximate deconvolution operator and ADL-ROM. To our knowledge, these are the first numerical analysis results for approximate deconvolution in a ROM context. We illustrate these numerical analysis results in the numerical simulation of convection-dominated flows.<|reference_end|> | arxiv | @article{moore2024a,
title={A Priori Error Bounds for the Approximate Deconvolution Leray Reduced
Order Model},
author={Ian Moore and Anna Sanfilippo and Francesco Ballarin and Traian Iliescu},
journal={arXiv preprint arXiv:2410.02673},
year={2024},
archivePrefix={arXiv},
eprint={2410.02673},
primaryClass={math.NA cs.NA}
} | moore2024a |
arxiv-665162 | 2410.02674 | Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus | <|reference_start|>Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus: We present a dataset of 19th century American literary orthovariant tokens with a novel layer of human-annotated dialect group tags designed to serve as the basis for computational experiments exploring literarily meaningful orthographic variation. We perform an initial broad set of experiments over this dataset using both token (BERT) and character (CANINE)-level contextual language models. We find indications that the "dialect effect" produced by intentional orthographic variation employs multiple linguistic channels, and that these channels are able to be surfaced to varied degrees given particular language modelling assumptions. Specifically, we find evidence showing that the choice of tokenization scheme meaningfully impacts the type of orthographic information a model is able to surface.<|reference_end|> | arxiv | @article{messner2024examining,
title={Examining Language Modeling Assumptions Using an Annotated Literary
Dialect Corpus},
author={Craig Messner and Tom Lippincott},
journal={arXiv preprint arXiv:2410.02674},
year={2024},
archivePrefix={arXiv},
eprint={2410.02674},
primaryClass={cs.CL}
} | messner2024examining |
arxiv-665163 | 2410.02675 | FAN: Fourier Analysis Networks | <|reference_start|>FAN: Fourier Analysis Networks: Despite the remarkable success achieved by neural networks, particularly those represented by MLP and Transformer, we reveal that they exhibit potential flaws in the modeling and reasoning of periodicity, i.e., they tend to memorize the periodic data rather than genuinely understanding the underlying principles of periodicity. However, periodicity is a crucial trait in various forms of reasoning and generalization, underpinning predictability across natural and engineered systems through recurring patterns in observations. In this paper, we propose FAN, a novel network architecture based on Fourier Analysis, which empowers the ability to efficiently model and reason about periodic phenomena. By introducing Fourier Series, the periodicity is naturally integrated into the structure and computational processes of the neural network, thus achieving a more accurate expression and prediction of periodic patterns. As a promising substitute to multi-layer perceptron (MLP), FAN can seamlessly replace MLP in various models with fewer parameters and FLOPs. Through extensive experiments, we demonstrate the effectiveness of FAN in modeling and reasoning about periodic functions, and the superiority and generalizability of FAN across a range of real-world tasks, including symbolic formula representation, time series forecasting, and language modeling.<|reference_end|> | arxiv | @article{dong2024fan:,
title={FAN: Fourier Analysis Networks},
author={Yihong Dong and Ge Li and Yongding Tao and Xue Jiang and Kechi Zhang
and Jia Li and Jing Su and Jun Zhang and Jingjing Xu},
journal={arXiv preprint arXiv:2410.02675},
year={2024},
archivePrefix={arXiv},
eprint={2410.02675},
primaryClass={cs.LG cs.AI cs.CL}
} | dong2024fan: |
arxiv-665164 | 2410.02677 | CulturalBench: a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs | <|reference_start|>CulturalBench: a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs: To make large language models (LLMs) more helpful across diverse cultures, it is essential to have effective cultural knowledge benchmarks to measure and track our progress. Effective benchmarks need to be robust, diverse, and challenging. We introduce CulturalBench: a set of 1,227 human-written and human-verified questions for effectively assessing LLMs' cultural knowledge, covering 45 global regions including the underrepresented ones like Bangladesh, Zimbabwe, and Peru. Questions - each verified by five independent annotators - span 17 diverse topics ranging from food preferences to greeting etiquettes. We evaluate models on two setups: CulturalBench-Easy and CulturalBench-Hard which share the same questions but asked differently. We find that LLMs are sensitive to such differences in setups (e.g., GPT-4o with 27.3% difference). Compared to human performance (92.6% accuracy), CulturalBench-Hard is more challenging for frontier LLMs with the best performing model (GPT-4o) at only 61.5% and the worst (Llama3-8b) at 21.4%. Moreover, we find that LLMs often struggle with tricky questions that have multiple correct answers (e.g., What utensils do the Chinese usually use?), revealing a tendency to converge to a single answer. Our results also indicate that OpenAI GPT-4o substantially outperforms other proprietary and open source models in questions related to all but one region (Oceania). Nonetheless, all models consistently underperform on questions related to South America and the Middle East.<|reference_end|> | arxiv | @article{chiu2024culturalbench:,
title={CulturalBench: a Robust, Diverse and Challenging Benchmark on Measuring
the (Lack of) Cultural Knowledge of LLMs},
author={Yu Ying Chiu and Liwei Jiang and Bill Yuchen Lin and Chan Young Park
and Shuyue Stella Li and Sahithya Ravi and Mehar Bhatia and Maria Antoniak
and Yulia Tsvetkov and Vered Shwartz and Yejin Choi},
journal={arXiv preprint arXiv:2410.02677},
year={2024},
archivePrefix={arXiv},
eprint={2410.02677},
primaryClass={cs.CL cs.AI cs.LG}
} | chiu2024culturalbench: |
arxiv-665165 | 2410.02678 | Distilling an End-to-End Voice Assistant Without Instruction Training Data | <|reference_start|>Distilling an End-to-End Voice Assistant Without Instruction Training Data: Voice assistants, such as Siri and Google Assistant, typically model audio and text separately, resulting in lost speech information and increased complexity. Recent efforts to address this with end-to-end Speech Large Language Models (LLMs) trained with supervised finetuning (SFT) have led to models ``forgetting'' capabilities from text-only LLMs. Our work proposes an alternative paradigm for training Speech LLMs without instruction data, using the response of a text-only LLM to transcripts as self-supervision. Importantly, this process can be performed without annotated responses. We show that our Distilled Voice Assistant (DiVA) generalizes to Spoken Question Answering, Classification, and Translation. Furthermore, we show that DiVA better meets user preferences, achieving a 72\% win rate compared with state-of-the-art models like Qwen 2 Audio, despite using $>$100x less training compute.<|reference_end|> | arxiv | @article{held2024distilling,
title={Distilling an End-to-End Voice Assistant Without Instruction Training
Data},
author={William Held and Ella Li and Michael Ryan and Weiyan Shi and Yanzhe
Zhang and Diyi Yang},
journal={arXiv preprint arXiv:2410.02678},
year={2024},
archivePrefix={arXiv},
eprint={2410.02678},
primaryClass={cs.CL cs.AI}
} | held2024distilling |
arxiv-665166 | 2410.02680 | Highly Adaptive Ridge | <|reference_start|>Highly Adaptive Ridge: In this paper we propose the Highly Adaptive Ridge (HAR): a regression method that achieves a $n^{-1/3}$ dimension-free L2 convergence rate in the class of right-continuous functions with square-integrable sectional derivatives. This is a large nonparametric function class that is particularly appropriate for tabular data. HAR is exactly kernel ridge regression with a specific data-adaptive kernel based on a saturated zero-order tensor-product spline basis expansion. We use simulation and real data to confirm our theory. We demonstrate empirical performance better than state-of-the-art algorithms for small datasets in particular.<|reference_end|> | arxiv | @article{schuler2024highly,
title={Highly Adaptive Ridge},
author={Alejandro Schuler and Alexander Hagemeister and Mark van der Laan},
journal={arXiv preprint arXiv:2410.02680},
year={2024},
archivePrefix={arXiv},
eprint={2410.02680},
primaryClass={stat.ML cs.LG}
} | schuler2024highly |
arxiv-665167 | 2410.02681 | Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models | <|reference_start|>Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models: Confidence calibration is critical for the safe deployment of machine learning models in the real world. However, such issue in vision-language models like CLIP, particularly after fine-tuning, has not been fully addressed. In this work, we demonstrate that existing prompt tuning methods usually lead to a trade-off of calibration between base and new classes: the cross-entropy loss in CoOp causes overconfidence in new classes by increasing textual label divergence, whereas the regularization of KgCoOp maintains the confidence level but results in underconfidence in base classes due to the improved accuracy. Inspired by the observations, we introduce Dynamic Outlier Regularization (DOR) to ensure the confidence calibration on both base and new classes after fine-tuning. In particular, we propose to minimize the feature deviation of novel textual labels (instead of base classes) sampled from a large vocabulary. In effect, DOR prevents the increase in textual divergence for new labels while easing restrictions on base classes. Extensive experiments demonstrate that DOR can enhance the calibration performance of current fine-tuning methods on base and new classes.<|reference_end|> | arxiv | @article{wang2024understanding,
title={Understanding and Mitigating Miscalibration in Prompt Tuning for
Vision-Language Models},
author={Shuoyuan Wang and Yixuan Li and Hongxin Wei},
journal={arXiv preprint arXiv:2410.02681},
year={2024},
archivePrefix={arXiv},
eprint={2410.02681},
primaryClass={cs.LG}
} | wang2024understanding |
arxiv-665168 | 2410.02682 | EinDecomp: Decomposition of Declaratively-Specified Machine Learning and Numerical Computations for Parallel Execution | <|reference_start|>EinDecomp: Decomposition of Declaratively-Specified Machine Learning and Numerical Computations for Parallel Execution: We consider the problem of automatically decomposing operations over tensors or arrays so that they can be executed in parallel on multiple devices. We address two, closely-linked questions. First, what programming abstraction should systems for tensor-based computing offer to enable such decompositions? Second, given that abstraction, how should such systems automatically decompose a tensor-based computation? We assert that tensor-based systems should offer a programming abstraction based on an extended Einstein summation notation, which is a fully declarative, mathematical specification for tensor computations. We show that any computation specified in the Einstein summation notation can be re-written into an equivalent tensor-relational computation, and this re-write generalizes existing notations of tensor parallelism such as ``data parallel'' and ``model parallel.'' We consider the algorithmic problem of optimally computing a tensor-relational decomposition of a graph of operations specified in our extended Einstein summation notation, and we experimentally show the value of the algorithm that we develop.<|reference_end|> | arxiv | @article{bourgeois2024eindecomp:,
title={EinDecomp: Decomposition of Declaratively-Specified Machine Learning and
Numerical Computations for Parallel Execution},
author={Daniel Bourgeois and Zhimin Ding and Dimitrije Jankov and Jiehui Li
and Mahmoud Sleem and Yuxin Tang and Jiawen Yao and Xinyu Yao and Chris
Jermaine},
journal={arXiv preprint arXiv:2410.02682},
year={2024},
archivePrefix={arXiv},
eprint={2410.02682},
primaryClass={cs.DC}
} | bourgeois2024eindecomp: |
arxiv-665169 | 2410.02683 | DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life | <|reference_start|>DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life: As we increasingly seek guidance from LLMs for decision-making in daily life, many of these decisions are not clear-cut and depend significantly on the personal values and ethical standards of the users. We present DailyDilemmas, a dataset of 1,360 moral dilemmas encountered in everyday life. Each dilemma includes two possible actions and with each action, the affected parties and human values invoked. Based on these dilemmas, we consolidated a set of human values across everyday topics e.g., interpersonal relationships, workplace, and environmental issues. We evaluated LLMs on these dilemmas to determine what action they will take and the values represented by these actions. Then, we analyzed these values through the lens of five popular theories inspired by sociology, psychology and philosophy. These theories are: World Value Survey, Moral Foundation Theory, Maslow's Hierarchy of Needs, Aristotle's Virtues, and Plutchik Wheel of Emotion. We find that LLMs are most aligned with self-expression over survival values in terms of the World Value Survey, and care over loyalty in Moral Foundation Theory. Interestingly, we find large preference differences in models for some core values such as truthfulness, e.g., the Mixtral-8x7B model tends to neglect it by 9.7% while the GPT-4-turbo model tends to select it by 9.4%. We also study the recent guidance released by OpenAI (ModelSpec), and Anthropic (Constitutional AI) to understand how their released principles reflect their actual value prioritization when facing nuanced moral reasoning in daily-life settings. We find that end users cannot effectively steer such prioritization using system prompts.<|reference_end|> | arxiv | @article{chiu2024dailydilemmas:,
title={DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of
Daily Life},
author={Yu Ying Chiu and Liwei Jiang and Yejin Choi},
journal={arXiv preprint arXiv:2410.02683},
year={2024},
archivePrefix={arXiv},
eprint={2410.02683},
primaryClass={cs.CL cs.AI cs.LG}
} | chiu2024dailydilemmas: |
arxiv-665170 | 2410.02684 | HiddenGuard: Fine-Grained Safe Generation with Specialized Representation Router | <|reference_start|>HiddenGuard: Fine-Grained Safe Generation with Specialized Representation Router: As Large Language Models (LLMs) grow increasingly powerful, ensuring their safety and alignment with human values remains a critical challenge. Ideally, LLMs should provide informative responses while avoiding the disclosure of harmful or sensitive information. However, current alignment approaches, which rely heavily on refusal strategies, such as training models to completely reject harmful prompts or applying coarse filters are limited by their binary nature. These methods either fully deny access to information or grant it without sufficient nuance, leading to overly cautious responses or failures to detect subtle harmful content. For example, LLMs may refuse to provide basic, public information about medication due to misuse concerns. Moreover, these refusal-based methods struggle to handle mixed-content scenarios and lack the ability to adapt to context-dependent sensitivities, which can result in over-censorship of benign content. To overcome these challenges, we introduce HiddenGuard, a novel framework for fine-grained, safe generation in LLMs. HiddenGuard incorporates Prism (rePresentation Router for In-Stream Moderation), which operates alongside the LLM to enable real-time, token-level detection and redaction of harmful content by leveraging intermediate hidden states. This fine-grained approach allows for more nuanced, context-aware moderation, enabling the model to generate informative responses while selectively redacting or replacing sensitive information, rather than outright refusal. We also contribute a comprehensive dataset with token-level fine-grained annotations of potentially harmful information across diverse contexts. 
Our experiments demonstrate that HiddenGuard achieves over 90% in F1 score for detecting and redacting harmful content while preserving the overall utility and informativeness of the model's responses.<|reference_end|> | arxiv | @article{mei2024hiddenguard:,
title={HiddenGuard: Fine-Grained Safe Generation with Specialized
Representation Router},
author={Lingrui Mei and Shenghua Liu and Yiwei Wang and Baolong Bi and Ruibin
Yuan and Xueqi Cheng},
journal={arXiv preprint arXiv:2410.02684},
year={2024},
archivePrefix={arXiv},
eprint={2410.02684},
primaryClass={cs.CL}
} | mei2024hiddenguard: |
arxiv-665171 | 2410.02686 | Optimal continuity bound for the von Neumann entropy under energy constraints | <|reference_start|>Optimal continuity bound for the von Neumann entropy under energy constraints: Using techniques proposed in [Sason, IEEE Trans. Inf. Th. 59, 7118 (2013)] and [Becker, Datta and Jabbour, IEEE Trans. Inf. Th. 69, 4128 (2023)], and building on results from the latter, we construct a globally optimal continuity bound for the von Neumann entropy under energy constraints imposed by arbitrary Hamiltonians, satisfying the Gibbs hypothesis. In particular, this provides a precise expression for the modulus of continuity of the von Neumann entropy over the set of states with bounded energy for infinite-dimensional quantum systems. Thus, it completely solves the problem of finding an optimal continuity bound for the von Neumann entropy in this setting, which was previously known only for pairs of states which were sufficiently close to each other. This continuity bound follows from a globally optimal semicontinuity bound for the von Neumann entropy under general energy constraints, which is our main technical result.<|reference_end|> | arxiv | @article{becker2024optimal,
title={Optimal continuity bound for the von Neumann entropy under energy
constraints},
author={S. Becker and N. Datta and M. G. Jabbour and M. E. Shirokov},
journal={arXiv preprint arXiv:2410.02686},
year={2024},
archivePrefix={arXiv},
eprint={2410.02686},
primaryClass={quant-ph cs.IT math-ph math.IT math.MP}
} | becker2024optimal |
arxiv-665172 | 2410.02687 | Numerical optimal control for delay differential equations: A simultaneous approach based on linearization of the delayed state | <|reference_start|>Numerical optimal control for delay differential equations: A simultaneous approach based on linearization of the delayed state: Time delays are ubiquitous in industry, and they must be accounted for when designing control strategies. However, numerical optimal control (NOC) of delay differential equations (DDEs) is challenging because it requires specialized discretization methods and the time delays may depend on the manipulated inputs or state variables. Therefore, in this work, we propose to linearize the delayed states around the current time. This results in a set of implicit differential equations, and we compare the steady states and the corresponding stability criteria of the DDEs and the approximate system. Furthermore, we propose a simultaneous approach for NOC of DDEs based on the linearization, and we discretize the approximate system using Euler's implicit method. Finally, we present a numerical example involving a molten salt nuclear fission reactor.<|reference_end|> | arxiv | @article{ritschel2024numerical,
title={Numerical optimal control for delay differential equations: A
simultaneous approach based on linearization of the delayed state},
author={Tobias K. S. Ritschel and S{\o}ren Stange},
journal={arXiv preprint arXiv:2410.02687},
year={2024},
archivePrefix={arXiv},
eprint={2410.02687},
primaryClass={math.OC cs.CE cs.SY eess.SY}
} | ritschel2024numerical |
arxiv-665173 | 2410.02688 | User-centric Immersive Communications in 6G: A Data-oriented Approach via Digital Twin | <|reference_start|>User-centric Immersive Communications in 6G: A Data-oriented Approach via Digital Twin: In this article, we present a novel user-centric service provision for immersive communications (IC) in 6G to deal with the uncertainty of individual user behaviors while satisfying unique requirements on the quality of multi-sensory experience. To this end, we propose a data-oriented approach for network resource management, featuring personalized data management that can support network modeling tailored to different user demands. Our approach leverages the digital twin (DT) technique as a key enabler. Particularly, a DT is established for each user, and the data attributes in the DT are customized based on the characteristics of the user. The DT functions, corresponding to various data operations, are customized in the development, evaluation, and update of network models to meet unique user demands. A trace-driven case study demonstrates the effectiveness of our approach in achieving user-centric IC and the significance of personalized data management in 6G.<|reference_end|> | arxiv | @article{zhou2024user-centric,
title={User-centric Immersive Communications in 6G: A Data-oriented Approach
via Digital Twin},
author={Conghao Zhou and Shisheng Hu and Jie Gao and Xinyu Huang and Weihua
Zhuang and Xuemin Shen},
journal={arXiv preprint arXiv:2410.02688},
year={2024},
archivePrefix={arXiv},
eprint={2410.02688},
primaryClass={cs.NI cs.AI}
} | zhou2024user-centric |
arxiv-665174 | 2410.02691 | On the Proper Treatment of Tokenization in Psycholinguistics | <|reference_start|>On the Proper Treatment of Tokenization in Psycholinguistics: Language models are widely used in computational psycholinguistics to test theories that relate the negative log probability (the surprisal) of a region of interest (a substring of characters) under a language model to its cognitive cost experienced by readers, as operationalized, for example, by gaze duration on the region. However, the application of modern language models to psycholinguistic studies is complicated by the practice of using tokenization as an intermediate step in training a model. Doing so results in a language model over token strings rather than one over character strings. Vexingly, regions of interest are generally misaligned with these token strings. The paper argues that token-level language models should be (approximately) marginalized into character-level language models before they are used in psycholinguistic studies to compute the surprisal of a region of interest; then, the marginalized character-level language model can be used to compute the surprisal of an arbitrary character substring, which we term a focal area, that the experimenter may wish to use as a predictor. Our proposal of marginalizing a token-level model into a character-level one solves this misalignment issue independently of the tokenization scheme. Empirically, we discover various focal areas whose surprisal is a better psychometric predictor than the surprisal of the region of interest itself.<|reference_end|> | arxiv | @article{giulianelli2024on,
title={On the Proper Treatment of Tokenization in Psycholinguistics},
author={Mario Giulianelli and Luca Malagutti and Juan Luis Gastaldi and Brian
DuSell and Tim Vieira and Ryan Cotterell},
journal={arXiv preprint arXiv:2410.02691},
year={2024},
archivePrefix={arXiv},
eprint={2410.02691},
primaryClass={cs.CL}
} | giulianelli2024on |
arxiv-665175 | 2410.02693 | Discovering Clues of Spoofed LM Watermarks | <|reference_start|>Discovering Clues of Spoofed LM Watermarks: LLM watermarks stand out as a promising way to attribute ownership of LLM-generated text. One threat to watermark credibility comes from spoofing attacks, where an unauthorized third party forges the watermark, enabling it to falsely attribute arbitrary texts to a particular LLM. While recent works have demonstrated that state-of-the-art schemes are in fact vulnerable to spoofing, they lack deeper qualitative analysis of the texts produced by spoofing methods. In this work, we for the first time reveal that there are observable differences between genuine and spoofed watermark texts. Namely, we show that regardless of their underlying approach, all current spoofing methods consistently leave observable artifacts in spoofed texts, indicative of watermark forgery. We build upon these findings to propose rigorous statistical tests that reliably reveal the presence of such artifacts, effectively discovering that a watermark was spoofed. Our experimental evaluation shows high test power across all current spoofing methods, providing insights into their fundamental limitations, and suggesting a way to mitigate this threat.<|reference_end|> | arxiv | @article{gloaguen2024discovering,
title={Discovering Clues of Spoofed LM Watermarks},
author={Thibaud Gloaguen and Nikola Jovanovi\'c and Robin Staab and Martin Vechev},
journal={arXiv preprint arXiv:2410.02693},
year={2024},
archivePrefix={arXiv},
eprint={2410.02693},
primaryClass={cs.CR cs.AI cs.LG}
} | gloaguen2024discovering |
arxiv-665176 | 2410.02694 | HELMET: How to Evaluate Long-Context Language Models Effectively and Thoroughly | <|reference_start|>HELMET: How to Evaluate Long-Context Language Models Effectively and Thoroughly: There have been many benchmarks for evaluating long-context language models (LCLMs), but developers often rely on synthetic tasks like needle-in-a-haystack (NIAH) or arbitrary subsets of tasks. It remains unclear whether they translate to the diverse downstream applications of LCLMs, and the inconsistency further complicates model comparison. We investigate the underlying reasons behind current practices and find that existing benchmarks often provide noisy signals due to low coverage of applications, insufficient lengths, unreliable metrics, and incompatibility with base models. In this work, we present HELMET (How to Evaluate Long-context Models Effectively and Thoroughly), a comprehensive benchmark encompassing seven diverse, application-centric categories. We also address many issues in previous benchmarks by adding controllable lengths up to 128k tokens, model-based evaluation for reliable metrics, and few-shot prompting for robustly evaluating base models. Consequently, we demonstrate that HELMET offers more reliable and consistent rankings of frontier LCLMs. Through a comprehensive study of 51 LCLMs, we find that (1) synthetic tasks like NIAH are not good predictors of downstream performance; (2) the diverse categories in HELMET exhibit distinct trends and low correlation with each other; and (3) while most LCLMs achieve perfect NIAH scores, open-source models significantly lag behind closed ones when the task requires full-context reasoning or following complex instructions -- the gap widens with increased lengths. 
Finally, we recommend using our RAG tasks for fast model development, as they are easy to run and more predictive of other downstream performance; ultimately, we advocate for a holistic evaluation across diverse tasks.<|reference_end|> | arxiv | @article{yen2024helmet:,
title={HELMET: How to Evaluate Long-Context Language Models Effectively and
Thoroughly},
author={Howard Yen and Tianyu Gao and Minmin Hou and Ke Ding and Daniel
Fleischer and Peter Izsak and Moshe Wasserblat and Danqi Chen},
journal={arXiv preprint arXiv:2410.02694},
year={2024},
archivePrefix={arXiv},
eprint={2410.02694},
primaryClass={cs.CL cs.AI}
} | yen2024helmet: |
arxiv-665177 | 2410.02695 | Fractional list packing for layered graphs | <|reference_start|>Fractional list packing for layered graphs: The fractional list packing number $\chi_{\ell}^{\bullet}(G)$ of a graph $G$ is a graph invariant that has recently arisen from the study of disjoint list-colourings. It measures how large the lists of a list-assignment $L:V(G)\rightarrow 2^{\mathbb{N}}$ need to be to ensure the existence of a `perfectly balanced' probability distribution on proper $L$-colourings, i.e., such that at every vertex $v$, every colour appears with equal probability $1/|L(v)|$. In this work we give various bounds on $\chi_{\ell}^{\bullet}(G)$, which admit strengthenings for correspondence and local-degree versions. As a corollary, we improve theorems on the related notion of flexible list colouring. In particular we study Cartesian products and $d$-degenerate graphs, and we prove that $\chi_{\ell}^{\bullet}(G)$ is bounded from above by the pathwidth of $G$ plus one. The correspondence analogue of the latter is false for treewidth instead of pathwidth.<|reference_end|> | arxiv | @article{cambie2024fractional,
title={Fractional list packing for layered graphs},
author={Stijn Cambie and Wouter Cames van Batenburg},
journal={arXiv preprint arXiv:2410.02695},
year={2024},
archivePrefix={arXiv},
eprint={2410.02695},
primaryClass={math.CO cs.DM}
} | cambie2024fractional |
arxiv-665178 | 2410.02698 | Lie Algebra Canonicalization: Equivariant Neural Operators under arbitrary Lie Groups | <|reference_start|>Lie Algebra Canonicalization: Equivariant Neural Operators under arbitrary Lie Groups: The quest for robust and generalizable machine learning models has driven recent interest in exploiting symmetries through equivariant neural networks. In the context of PDE solvers, recent works have shown that Lie point symmetries can be a useful inductive bias for Physics-Informed Neural Networks (PINNs) through data and loss augmentation. Despite this, directly enforcing equivariance within the model architecture for these problems remains elusive. This is because many PDEs admit non-compact symmetry groups, oftentimes not studied beyond their infinitesimal generators, making them incompatible with most existing equivariant architectures. In this work, we propose Lie aLgebrA Canonicalization (LieLAC), a novel approach that exploits only the action of infinitesimal generators of the symmetry group, circumventing the need for knowledge of the full group structure. To achieve this, we address existing theoretical issues in the canonicalization literature, establishing connections with frame averaging in the case of continuous non-compact groups. Operating within the framework of canonicalization, LieLAC can easily be integrated with unconstrained pre-trained models, transforming inputs to a canonical form before feeding them into the existing model, effectively aligning the input for model inference according to allowed symmetries. LieLAC utilizes standard Lie group descent schemes, achieving equivariance in pre-trained models. Finally, we showcase LieLAC's efficacy on tasks of invariant image classification and Lie point symmetry equivariant neural PDE solvers using pre-trained models.<|reference_end|> | arxiv | @article{shumaylov2024lie,
title={Lie Algebra Canonicalization: Equivariant Neural Operators under
arbitrary Lie Groups},
author={Zakhar Shumaylov and Peter Zaika and James Rowbottom and Ferdia Sherry
and Melanie Weber and Carola-Bibiane Sch\"onlieb},
journal={arXiv preprint arXiv:2410.02698},
year={2024},
archivePrefix={arXiv},
eprint={2410.02698},
primaryClass={cs.LG cs.CV cs.NA math.NA}
} | shumaylov2024lie |
arxiv-665179 | 2410.02701 | Impact of a reclassification on Web of Science articles on bibliometric indicators | <|reference_start|>Impact of a reclassification on Web of Science articles on bibliometric indicators: In order to avoid the ambiguous classification of articles in multiple categories in the Web of Science and the resulting complication of bibliometric indicators, a reclassification of articles in the Web of Science categories was carried out according to the method of S. Milojevi\'c (2020). The higher hierarchical level from the OST classification into 11 scientific disciplines is also revised. Though in most cases articles are assigned to a subject category close to the original category, the reclassification changes the subject category of about 50% of the documents of the database. Therefore, the world distribution of disciplines and disciplinary profiles of scientific actors are modified. A sample of twenty-five countries highlights the impact of the reclassification on country specialization indexes. Field-normalized indicators are also impacted. The level of changes is explored in the case of the Mean Normalized Citation Indicator (MNCS). A more in-depth analysis of the MNCS in Mathematics is carried out and reveals different strategies of countries to publish works with a mathematical background.<|reference_end|> | arxiv | @article{lahatte2024impact,
title={Impact of a reclassification on Web of Science articles on bibliometric
indicators},
author={Ag\'enor Lahatte and \'Elisabeth de Turckheim},
journal={arXiv preprint arXiv:2410.02701},
year={2024},
archivePrefix={arXiv},
eprint={2410.02701},
primaryClass={cs.DL}
} | lahatte2024impact |
arxiv-665180 | 2410.02703 | Selective Attention Improves Transformer | <|reference_start|>Selective Attention Improves Transformer: Unneeded elements in the attention's context degrade performance. We introduce Selective Attention, a simple parameter-free change to the standard attention mechanism which reduces attention to unneeded elements. Selective attention improves language modeling performance in a variety of model sizes and context lengths. For example, a range of transformers trained with the language modeling objective on C4 with selective attention perform equivalently to standard transformers with ~2X more heads and parameters in their attention modules. Selective attention also allows decreasing the size of the attention's context buffer, leading to meaningful reductions in the memory and compute requirements during inference. For example, transformers with 100M parameters trained on C4 with context sizes of 512, 1,024, and 2,048 need 16X, 25X, and 47X less memory for their attention module, respectively, when equipped with selective attention, than those without selective attention, with the same validation perplexity.<|reference_end|> | arxiv | @article{leviathan2024selective,
title={Selective Attention Improves Transformer},
author={Yaniv Leviathan and Matan Kalman and Yossi Matias},
journal={arXiv preprint arXiv:2410.02703},
year={2024},
archivePrefix={arXiv},
eprint={2410.02703},
primaryClass={cs.CL cs.AI cs.LG}
} | leviathan2024selective |
arxiv-665181 | 2410.02705 | ControlAR: Controllable Image Generation with Autoregressive Models | <|reference_start|>ControlAR: Controllable Image Generation with Autoregressive Models: Autoregressive (AR) models have reformulated image generation as next-token prediction, demonstrating remarkable potential and emerging as strong competitors to diffusion models. However, control-to-image generation, akin to ControlNet, remains largely unexplored within AR models. Although a natural approach, inspired by advancements in Large Language Models, is to tokenize control images into tokens and prefill them into the autoregressive model before decoding image tokens, it still falls short in generation quality compared to ControlNet and suffers from inefficiency. To this end, we introduce ControlAR, an efficient and effective framework for integrating spatial controls into autoregressive image generation models. Firstly, we explore control encoding for AR models and propose a lightweight control encoder to transform spatial inputs (e.g., canny edges or depth maps) into control tokens. Then ControlAR exploits the conditional decoding method to generate the next image token conditioned on the per-token fusion between control and image tokens, similar to positional encodings. Compared to prefilling tokens, using conditional decoding significantly strengthens the control capability of AR models but also maintains the model's efficiency. Furthermore, the proposed ControlAR surprisingly empowers AR models with arbitrary-resolution image generation via conditional decoding and specific controls. Extensive experiments can demonstrate the controllability of the proposed ControlAR for the autoregressive control-to-image generation across diverse inputs, including edges, depths, and segmentation masks. Furthermore, both quantitative and qualitative results indicate that ControlAR surpasses previous state-of-the-art controllable diffusion models, e.g., ControlNet++. 
Code, models, and demo will soon be available at https://github.com/hustvl/ControlAR.<|reference_end|> | arxiv | @article{li2024controlar:,
title={ControlAR: Controllable Image Generation with Autoregressive Models},
author={Zongming Li and Tianheng Cheng and Shoufa Chen and Peize Sun and
Haocheng Shen and Longjin Ran and Xiaoxin Chen and Wenyu Liu and Xinggang Wang},
journal={arXiv preprint arXiv:2410.02705},
year={2024},
archivePrefix={arXiv},
eprint={2410.02705},
primaryClass={cs.CV}
} | li2024controlar: |
arxiv-665182 | 2410.02707 | LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations | <|reference_start|>LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations: Large language models (LLMs) often produce errors, including factual inaccuracies, biases, and reasoning failures, collectively referred to as "hallucinations". Recent studies have demonstrated that LLMs' internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors. In this work, we show that the internal representations of LLMs encode much more information about truthfulness than previously recognized. We first discover that the truthfulness information is concentrated in specific tokens, and leveraging this property significantly enhances error detection performance. Yet, we show that such error detectors fail to generalize across datasets, implying that -- contrary to prior claims -- truthfulness encoding is not universal but rather multifaceted. Next, we show that internal representations can also be used for predicting the types of errors the model is likely to make, facilitating the development of tailored mitigation strategies. Lastly, we reveal a discrepancy between LLMs' internal encoding and external behavior: they may encode the correct answer, yet consistently generate an incorrect one. Taken together, these insights deepen our understanding of LLM errors from the model's internal perspective, which can guide future research on enhancing error analysis and mitigation.<|reference_end|> | arxiv | @article{orgad2024llms,
title={LLMs Know More Than They Show: On the Intrinsic Representation of LLM
Hallucinations},
author={Hadas Orgad and Michael Toker and Zorik Gekhman and Roi Reichart and
Idan Szpektor and Hadas Kotek and Yonatan Belinkov},
journal={arXiv preprint arXiv:2410.02707},
year={2024},
archivePrefix={arXiv},
eprint={2410.02707},
primaryClass={cs.CL cs.AI}
} | orgad2024llms |
arxiv-665183 | 2410.02709 | Cracking the code: Lessons from 15 years of digital health IPOs for the era of AI | <|reference_start|>Cracking the code: Lessons from 15 years of digital health IPOs for the era of AI: Introduction: As digital health evolves, identifying factors that drive success is crucial. This study examines how reimbursement billing codes affect the long-term financial performance of digital health companies on U.S. stock markets, addressing the question: What separates the winners from the rest? Methods: We analyzed digital health companies that went public on U.S. stock exchanges between 2010 and 2021, offering products or services aimed at improving personal health or disease management within the U.S. market. A search using Google and existing IPO lists identified eligible companies. They were categorized based on the presence or absence of billing codes at the time of their initial public offering (IPO). Key performance indicators, including Compound Annual Growth Rate (CAGR), relative performance to benchmark indices, and market capitalization change, were compared using Mann-Whitney U and Fisher's Exact tests. Results: Of the 33 companies analyzed, 15 (45.5%) had billing codes at IPO. The median IPO price was $17.00, with no significant difference between groups. Those with billing codes were 25.5 times more likely to achieve a positive CAGR. Their median market capitalization increased 56.3%, compared to a median decline of 80.1% for those without billing codes. All five top performers, in terms of CAGR, had billing codes at IPO, whereas nine of the ten worst performers lacked them. Companies without billing codes were 16 times more likely to experience a drop in market capitalization by the study's end. Conclusion: Founders, investors, developers and analysts may have overestimated consumers' willingness to pay out-of-pocket or underestimated reimbursement complexities. 
As the sector evolves, especially with AI-driven solutions, stakeholders should prioritize billing codes to ensure sustainable growth, financial stability, and maximized investor returns.<|reference_end|> | arxiv | @article{jadad-garcia2024cracking,
title={Cracking the code: Lessons from 15 years of digital health IPOs for the
era of AI},
author={Tamen Jadad-Garcia and Alejandro R. Jadad},
journal={arXiv preprint arXiv:2410.02709},
year={2024},
archivePrefix={arXiv},
eprint={2410.02709},
primaryClass={q-fin.PM cs.CE}
} | jadad-garcia2024cracking |
arxiv-665184 | 2410.02710 | SteerDiff: Steering towards Safe Text-to-Image Diffusion Models | <|reference_start|>SteerDiff: Steering towards Safe Text-to-Image Diffusion Models: Text-to-image (T2I) diffusion models have drawn attention for their ability to generate high-quality images with precise text alignment. However, these models can also be misused to produce inappropriate content. Existing safety measures, which typically rely on text classifiers or ControlNet-like approaches, are often insufficient. Traditional text classifiers rely on large-scale labeled datasets and can be easily bypassed by rephrasing. As diffusion models continue to scale, fine-tuning these safeguards becomes increasingly challenging and lacks flexibility. Recent red-teaming attack researches further underscore the need for a new paradigm to prevent the generation of inappropriate content. In this paper, we introduce SteerDiff, a lightweight adaptor module designed to act as an intermediary between user input and the diffusion model, ensuring that generated images adhere to ethical and safety standards with little to no impact on usability. SteerDiff identifies and manipulates inappropriate concepts within the text embedding space to guide the model away from harmful outputs. We conduct extensive experiments across various concept unlearning tasks to evaluate the effectiveness of our approach. Furthermore, we benchmark SteerDiff against multiple red-teaming strategies to assess its robustness. Finally, we explore the potential of SteerDiff for concept forgetting tasks, demonstrating its versatility in text-conditioned image generation.<|reference_end|> | arxiv | @article{zhang2024steerdiff:,
title={SteerDiff: Steering towards Safe Text-to-Image Diffusion Models},
author={Hongxiang Zhang and Yifeng He and Hao Chen},
journal={arXiv preprint arXiv:2410.02710},
year={2024},
archivePrefix={arXiv},
eprint={2410.02710},
primaryClass={cs.CV cs.AI cs.CR}
} | zhang2024steerdiff: |
arxiv-665185 | 2410.02711 | NETS: A Non-Equilibrium Transport Sampler | <|reference_start|>NETS: A Non-Equilibrium Transport Sampler: We propose an algorithm, termed the Non-Equilibrium Transport Sampler (NETS), to sample from unnormalized probability distributions. NETS can be viewed as a variant of annealed importance sampling (AIS) based on Jarzynski's equality, in which the stochastic differential equation used to perform the non-equilibrium sampling is augmented with an additional learned drift term that lowers the impact of the unbiasing weights used in AIS. We show that this drift is the minimizer of a variety of objective functions, which can all be estimated in an unbiased fashion without backpropagating through solutions of the stochastic differential equations governing the sampling. We also prove that some of these objectives control the Kullback-Leibler divergence of the estimated distribution from its target. NETS is shown to be unbiased and, in addition, has a tunable diffusion coefficient which can be adjusted post-training to maximize the effective sample size. We demonstrate the efficacy of the method on standard benchmarks, high-dimensional Gaussian mixture distributions, and a model from statistical lattice field theory, for which it surpasses the performance of related work and existing baselines.<|reference_end|> | arxiv | @article{albergo2024nets:,
title={NETS: A Non-Equilibrium Transport Sampler},
author={Michael S. Albergo and Eric Vanden-Eijnden},
journal={arXiv preprint arXiv:2410.02711},
year={2024},
archivePrefix={arXiv},
eprint={2410.02711},
primaryClass={cs.LG cond-mat.stat-mech hep-lat}
} | albergo2024nets: |
arxiv-665186 | 2410.02712 | LLaVA-Critic: Learning to Evaluate Multimodal Models | <|reference_start|>LLaVA-Critic: Learning to Evaluate Multimodal Models: We introduce LLaVA-Critic, the first open-source large multimodal model (LMM) designed as a generalist evaluator to assess performance across a wide range of multimodal tasks. LLaVA-Critic is trained using a high-quality critic instruction-following dataset that incorporates diverse evaluation criteria and scenarios. Our experiments demonstrate the model's effectiveness in two key areas: (1) LMM-as-a-Judge, where LLaVA-Critic provides reliable evaluation scores, performing on par with or surpassing GPT models on multiple evaluation benchmarks; and (2) Preference Learning, where it generates reward signals for preference learning, enhancing model alignment capabilities. This work underscores the potential of open-source LMMs in self-critique and evaluation, setting the stage for future research into scalable, superhuman alignment feedback mechanisms for LMMs.<|reference_end|> | arxiv | @article{xiong2024llava-critic:,
title={LLaVA-Critic: Learning to Evaluate Multimodal Models},
author={Tianyi Xiong and Xiyao Wang and Dong Guo and Qinghao Ye and Haoqi Fan
and Quanquan Gu and Heng Huang and Chunyuan Li},
journal={arXiv preprint arXiv:2410.02712},
year={2024},
archivePrefix={arXiv},
eprint={2410.02712},
primaryClass={cs.CV cs.CL}
} | xiong2024llava-critic: |
arxiv-665187 | 2410.02713 | Video Instruction Tuning With Synthetic Data | <|reference_start|>Video Instruction Tuning With Synthetic Data: The development of video large multimodal models (LMMs) has been hindered by the difficulty of curating large amounts of high-quality raw data from the web. To address this, we propose an alternative approach by creating a high-quality synthetic dataset specifically for video instruction-following, namely LLaVA-Video-178K. This dataset includes key tasks such as detailed captioning, open-ended question-answering (QA), and multiple-choice QA. By training on this dataset, in combination with existing visual instruction tuning data, we introduce LLaVA-Video, a new video LMM. Our experiments demonstrate that LLaVA-Video achieves strong performance across various video benchmarks, highlighting the effectiveness of our dataset. We plan to release the dataset, its generation pipeline, and the model checkpoints.<|reference_end|> | arxiv | @article{zhang2024video,
title={Video Instruction Tuning With Synthetic Data},
author={Yuanhan Zhang and Jinming Wu and Wei Li and Bo Li and Zejun Ma and
Ziwei Liu and Chunyuan Li},
journal={arXiv preprint arXiv:2410.02713},
year={2024},
archivePrefix={arXiv},
eprint={2410.02713},
primaryClass={cs.CV cs.CL}
} | zhang2024video |
arxiv-665188 | 2410.02714 | AlzhiNet: Traversing from 2DCNN to 3DCNN, Towards Early Detection and Diagnosis of Alzheimer's Disease | <|reference_start|>AlzhiNet: Traversing from 2DCNN to 3DCNN, Towards Early Detection and Diagnosis of Alzheimer's Disease: Alzheimer's disease (AD) is a progressive neurodegenerative disorder with increasing prevalence among the aging population, necessitating early and accurate diagnosis for effective disease management. In this study, we present a novel hybrid deep learning framework that integrates both 2D Convolutional Neural Networks (2D-CNN) and 3D Convolutional Neural Networks (3D-CNN), along with a custom loss function and volumetric data augmentation, to enhance feature extraction and improve classification performance in AD diagnosis. According to extensive experiments, AlzhiNet outperforms standalone 2D and 3D models, highlighting the importance of combining these complementary representations of data. The depth and quality of 3D volumes derived from the augmented 2D slices also significantly influence the model's performance. The results indicate that carefully selecting weighting factors in hybrid predictions is imperative for achieving optimal results. Our framework has been validated on the Magnetic Resonance Imaging (MRI) from Kaggle and MIRIAD datasets, obtaining accuracies of 98.9% and 99.99%, respectively, with an AUC of 100%. Furthermore, AlzhiNet was studied under a variety of perturbation scenarios on the Alzheimer's Kaggle dataset, including Gaussian noise, brightness, contrast, salt and pepper noise, color jitter, and occlusion. The results obtained show that AlzhiNet is more robust to perturbations than ResNet-18, making it an excellent choice for real-world applications. This approach represents a promising advancement in the early diagnosis and treatment planning for Alzheimer's disease.<|reference_end|> | arxiv | @article{akindele2024alzhinet:,
title={AlzhiNet: Traversing from 2DCNN to 3DCNN, Towards Early Detection and
Diagnosis of Alzheimer's Disease},
author={Romoke Grace Akindele and Samuel Adebayo and Paul Shekonya Kanda and Ming Yu},
journal={arXiv preprint arXiv:2410.02714},
year={2024},
archivePrefix={arXiv},
eprint={2410.02714},
primaryClass={eess.IV cs.CV cs.LG}
} | akindele2024alzhinet: |
arxiv-665189 | 2410.02717 | Measurements with Noise: Bayesian Optimization for Co-optimizing Noise and Property Discovery in Automated Experiments | <|reference_start|>Measurements with Noise: Bayesian Optimization for Co-optimizing Noise and Property Discovery in Automated Experiments: We have developed a Bayesian optimization (BO) workflow that integrates intra-step noise optimization into automated experimental cycles. Traditional BO approaches in automated experiments focus on optimizing experimental trajectories but often overlook the impact of measurement noise on data quality and cost. Our proposed framework simultaneously optimizes both the target property and the associated measurement noise by introducing time as an additional input parameter, thereby balancing the signal-to-noise ratio and experimental duration. Two approaches are explored: a reward-driven noise optimization and a double-optimization acquisition function, both enhancing the efficiency of automated workflows by considering noise and cost within the optimization process. We validate our method through simulations and real-world experiments using Piezoresponse Force Microscopy (PFM), demonstrating the successful optimization of measurement duration and property exploration. Our approach offers a scalable solution for optimizing multiple variables in automated experimental workflows, improving data quality, and reducing resource expenditure in materials science and beyond.<|reference_end|> | arxiv | @article{slautin2024measurements,
title={Measurements with Noise: Bayesian Optimization for Co-optimizing Noise
and Property Discovery in Automated Experiments},
author={Boris N. Slautin and Yu Liu and Jan Dec and Vladimir V. Shvartsman and
Doru C. Lupascu and Maxim Ziatdinov and Sergei V. Kalinin},
journal={arXiv preprint arXiv:2410.02717},
year={2024},
archivePrefix={arXiv},
eprint={2410.02717},
primaryClass={cond-mat.mtrl-sci cs.AI cs.LG}
} | slautin2024measurements |
arxiv-665190 | 2410.02718 | SynthFormer: Equivariant Pharmacophore-based Generation of Molecules for Ligand-Based Drug Design | <|reference_start|>SynthFormer: Equivariant Pharmacophore-based Generation of Molecules for Ligand-Based Drug Design: Drug discovery is a complex and resource-intensive process, with significant time and cost investments required to bring new medicines to patients. Recent advancements in generative machine learning (ML) methods offer promising avenues to accelerate early-stage drug discovery by efficiently exploring chemical space. This paper addresses the gap between in silico generative approaches and practical in vitro methodologies, highlighting the need for their integration to optimize molecule discovery. We introduce SynthFormer, a novel ML model that utilizes a 3D equivariant encoder for pharmacophores to generate fully synthesizable molecules, constructed as synthetic trees. Unlike previous methods, SynthFormer incorporates 3D information and provides synthetic paths, enhancing its ability to produce molecules with good docking scores across various proteins. Our contributions include a new methodology for efficient chemical space exploration using 3D information, a novel architecture called SynthFormer for translating 3D pharmacophore representations into molecules, and a meaningful embedding space that organizes reagents for drug discovery optimization. SynthFormer generates molecules that dock well and enables effective late-stage optimization restricted by synthesis paths.<|reference_end|> | arxiv | @article{jocys2024synthformer:,
title={SynthFormer: Equivariant Pharmacophore-based Generation of Molecules for
Ligand-Based Drug Design},
author={Zygimantas Jocys and Henriette M.G. Willems and Katayoun Farrahi},
journal={arXiv preprint arXiv:2410.02718},
year={2024},
archivePrefix={arXiv},
eprint={2410.02718},
primaryClass={cs.LG}
} | jocys2024synthformer: |
arxiv-665191 | 2410.02719 | UncertaintyRAG: Span-Level Uncertainty Enhanced Long-Context Modeling for Retrieval-Augmented Generation | <|reference_start|>UncertaintyRAG: Span-Level Uncertainty Enhanced Long-Context Modeling for Retrieval-Augmented Generation: We present UncertaintyRAG, a novel approach for long-context Retrieval-Augmented Generation (RAG) that utilizes Signal-to-Noise Ratio (SNR)-based span uncertainty to estimate similarity between text chunks. This span uncertainty enhances model calibration, improving robustness and mitigating semantic inconsistencies introduced by random chunking. Leveraging this insight, we propose an efficient unsupervised learning technique to train the retrieval model, alongside an effective data sampling and scaling strategy. UncertaintyRAG outperforms baselines by 2.03% on LLaMA-2-7B, achieving state-of-the-art results while using only 4% of the training data compared to other advanced open-source retrieval models under distribution shift settings. Our method demonstrates strong calibration through span uncertainty, leading to improved generalization and robustness in long-context RAG tasks. Additionally, UncertaintyRAG provides a lightweight retrieval model that can be integrated into any large language model with varying context window lengths, without the need for fine-tuning, showcasing the flexibility of our approach.<|reference_end|> | arxiv | @article{li2024uncertaintyrag:,
title={UncertaintyRAG: Span-Level Uncertainty Enhanced Long-Context Modeling
for Retrieval-Augmented Generation},
author={Zixuan Li and Jing Xiong and Fanghua Ye and Chuanyang Zheng and Xun Wu
and Jianqiao Lu and Zhongwei Wan and Xiaodan Liang and Chengming Li and
Zhenan Sun and Lingpeng Kong and Ngai Wong},
journal={arXiv preprint arXiv:2410.02719},
year={2024},
archivePrefix={arXiv},
eprint={2410.02719},
primaryClass={cs.CL}
} | li2024uncertaintyrag: |
arxiv-665192 | 2410.02720 | Curvature Diversity-Driven Deformation and Domain Alignment for Point Cloud | <|reference_start|>Curvature Diversity-Driven Deformation and Domain Alignment for Point Cloud: Unsupervised Domain Adaptation (UDA) is crucial for reducing the need for extensive manual data annotation when training deep networks on point cloud data. A significant challenge of UDA lies in effectively bridging the domain gap. To tackle this challenge, we propose \textbf{C}urvature \textbf{D}iversity-Driven \textbf{N}uclear-Norm Wasserstein \textbf{D}omain Alignment (CDND). Our approach first introduces a \textit{\textbf{Curv}ature Diversity-driven Deformation \textbf{Rec}onstruction (CurvRec)} task, which effectively mitigates the gap between the source and target domains by enabling the model to extract salient features from semantically rich regions of a given point cloud. We then propose \textit{\textbf{D}eformation-based \textbf{N}uclear-norm \textbf{W}asserstein \textbf{D}iscrepancy (D-NWD)}, which applies the Nuclear-norm Wasserstein Discrepancy to both \textit{deformed and original} data samples to align the source and target domains. Furthermore, we contribute a theoretical justification for the effectiveness of D-NWD in distribution alignment and demonstrate that it is \textit{generic} enough to be applied to \textbf{any} deformations. To validate our method, we conduct extensive experiments on two public domain adaptation datasets for point cloud classification and segmentation tasks. Empirical experiment results show that our CDND achieves state-of-the-art performance by a noticeable margin over existing approaches.<|reference_end|> | arxiv | @article{wu2024curvature,
title={Curvature Diversity-Driven Deformation and Domain Alignment for Point
Cloud},
author={Mengxi Wu and Hao Huang and Yi Fang and Mohammad Rostami},
journal={arXiv preprint arXiv:2410.02720},
year={2024},
archivePrefix={arXiv},
eprint={2410.02720},
primaryClass={cs.CV cs.AI}
} | wu2024curvature |
arxiv-665193 | 2410.02721 | Domain-Specific Retrieval-Augmented Generation Using Vector Stores, Knowledge Graphs, and Tensor Factorization | <|reference_start|>Domain-Specific Retrieval-Augmented Generation Using Vector Stores, Knowledge Graphs, and Tensor Factorization: Large Language Models (LLMs) are pre-trained on large-scale corpora and excel in numerous general natural language processing (NLP) tasks, such as question answering (QA). Despite their advanced language capabilities, when it comes to domain-specific and knowledge-intensive tasks, LLMs suffer from hallucinations, knowledge cut-offs, and lack of knowledge attributions. Additionally, fine tuning LLMs' intrinsic knowledge to highly specific domains is an expensive and time consuming process. The retrieval-augmented generation (RAG) process has recently emerged as a method capable of optimization of LLM responses, by referencing them to a predetermined ontology. It was shown that using a Knowledge Graph (KG) ontology for RAG improves the QA accuracy, by taking into account relevant sub-graphs that preserve the information in a structured manner. In this paper, we introduce SMART-SLIC, a highly domain-specific LLM framework, that integrates RAG with KG and a vector store (VS) that store factual domain specific information. Importantly, to avoid hallucinations in the KG, we build these highly domain-specific KGs and VSs without the use of LLMs, but via NLP, data mining, and nonnegative tensor factorization with automatic model selection. Pairing our RAG with a domain-specific: (i) KG (containing structured information), and (ii) VS (containing unstructured information) enables the development of domain-specific chat-bots that attribute the source of information, mitigate hallucinations, lessen the need for fine-tuning, and excel in highly domain-specific question answering tasks. We pair SMART-SLIC with chain-of-thought prompting agents. 
The framework is designed to be generalizable to adapt to any specific or specialized domain. In this paper, we demonstrate the question answering capabilities of our framework on a corpus of scientific publications on malware analysis and anomaly detection.<|reference_end|> | arxiv | @article{barron2024domain-specific,
title={Domain-Specific Retrieval-Augmented Generation Using Vector Stores,
Knowledge Graphs, and Tensor Factorization},
author={Ryan C. Barron and Ves Grantcharov and Selma Wanna and Maksim E. Eren
and Manish Bhattarai and Nicholas Solovyev and George Tompkins and Charles
Nicholas and Kim {\O}. Rasmussen and Cynthia Matuszek and Boian S. Alexandrov},
journal={arXiv preprint arXiv:2410.02721},
year={2024},
archivePrefix={arXiv},
eprint={2410.02721},
primaryClass={cs.CL cs.AI cs.IR cs.SE}
} | barron2024domain-specific |
arxiv-665194 | 2410.02724 | Large Language Models as Markov Chains | <|reference_start|>Large Language Models as Markov Chains: Large language models (LLMs) have proven to be remarkably efficient, both across a wide range of natural language processing tasks and well beyond them. However, a comprehensive theoretical analysis of the origins of their impressive performance remains elusive. In this paper, we approach this challenging task by drawing an equivalence between generic autoregressive language models with vocabulary of size $T$ and context window of size $K$ and Markov chains defined on a finite state space of size $\mathcal{O}(T^K)$. We derive several surprising findings related to the existence of a stationary distribution of Markov chains that capture the inference power of LLMs, their speed of convergence to it, and the influence of the temperature on the latter. We then prove pre-training and in-context generalization bounds and show how the drawn equivalence allows us to enrich their interpretation. Finally, we illustrate our theoretical guarantees with experiments on several recent LLMs to highlight how they capture the behavior observed in practice.<|reference_end|> | arxiv | @article{zekri2024large,
title={Large Language Models as Markov Chains},
author={Oussama Zekri and Ambroise Odonnat and Abdelhakim Benechehab and Linus
Bleistein and Nicolas Boull{\'e} and Ievgen Redko},
journal={arXiv preprint arXiv:2410.02724},
year={2024},
archivePrefix={arXiv},
eprint={2410.02724},
primaryClass={stat.ML cs.AI cs.CL cs.LG}
} | zekri2024large |
arxiv-665195 | 2410.02725 | Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation | <|reference_start|>Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation: Inference-time computation is a powerful paradigm to enhance the performance of large language models (LLMs), with Best-of-N sampling being a widely used technique. However, this method is computationally expensive, requiring both (1) an external reward model and (2) the generation of multiple samples. In this work, we introduce a new generative self-evaluation scheme designed to adaptively reduce the number of generated samples while maintaining or even improving performance. We use a generative reward model formulation, allowing the LLM to predict mid-generation the probability that restarting the generation will yield a better response. These predictions are obtained without an external reward model and can be used to decide whether or not to generate more samples, prune unpromising samples early on, or to pick the best sample. This capability is very inexpensive as it involves generating a single predefined token. Trained using a dataset constructed with real unfiltered LMSYS user prompts, Llama 3.1 8B's win rate against GPT-4 on AlpacaEval increases from 21% to 34% with 16 samples and math performance on GSM8K improves from 84% to 91%. By sampling only when the LLM determines that it is beneficial to do so and adaptively adjusting temperature annealing, we demonstrate that 74% of the improvement from using 16 samples can be achieved with only 1.2 samples on average. We further demonstrate that 50-75% of samples can be pruned early in generation with minimal degradation in performance. Overall, our methods enable more efficient and scalable compute utilization during inference for LLMs.<|reference_end|> | arxiv | @article{manvi2024adaptive,
title={Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better,
Even Mid-Generation},
author={Rohin Manvi and Anikait Singh and Stefano Ermon},
journal={arXiv preprint arXiv:2410.02725},
year={2024},
archivePrefix={arXiv},
eprint={2410.02725},
primaryClass={cs.CL cs.AI cs.LG}
} | manvi2024adaptive |
arxiv-665196 | 2410.02729 | Unified Multi-Modal Interleaved Document Representation for Information Retrieval | <|reference_start|>Unified Multi-Modal Interleaved Document Representation for Information Retrieval: Information Retrieval (IR) methods aim to identify relevant documents in response to a given query, which have gained remarkable attention due to their successful application in various natural language tasks. However, existing approaches typically consider only the textual information within the documents, which overlooks the fact that documents can contain multiple modalities, including texts, images, and tables. Further, they often segment each long document into multiple discrete passages for embedding, preventing them from capturing the overall document context and interactions between paragraphs. We argue that these two limitations lead to suboptimal document representations for retrieval. In this work, to address them, we aim to produce more comprehensive and nuanced document representations by holistically embedding documents interleaved with different modalities. Specifically, we achieve this by leveraging the capability of recent vision-language models that enable the processing and integration of text, images, and tables into a unified format and representation. Moreover, to mitigate the information loss from segmenting documents into passages, instead of representing and retrieving passages individually, we further merge the representations of segmented passages into one single document representation, while we additionally introduce a reranking strategy to decouple and identify the relevant passage within the document if necessary. 
Then, through extensive experiments on diverse information retrieval scenarios considering both the textual and multimodal queries, we show that our approach substantially outperforms relevant baselines, thanks to the consideration of the multimodal information interleaved within the documents in a unified way.<|reference_end|> | arxiv | @article{lee2024unified,
title={Unified Multi-Modal Interleaved Document Representation for Information
Retrieval},
author={Jaewoo Lee and Joonho Ko and Jinheon Baek and Soyeong Jeong and Sung
Ju Hwang},
journal={arXiv preprint arXiv:2410.02729},
year={2024},
archivePrefix={arXiv},
eprint={2410.02729},
primaryClass={cs.CL cs.AI cs.IR}
} | lee2024unified |
arxiv-665197 | 2410.02730 | DivScene: Benchmarking LVLMs for Object Navigation with Diverse Scenes and Objects | <|reference_start|>DivScene: Benchmarking LVLMs for Object Navigation with Diverse Scenes and Objects: Object navigation in unknown environments is crucial for deploying embodied agents in real-world applications. While we have witnessed huge progress due to large-scale scene datasets, faster simulators, and stronger models, previous studies mainly focus on limited scene types and target objects. In this paper, we study a new task of navigating to diverse target objects in a large number of scene types. To benchmark the problem, we present a large-scale scene dataset, DivScene, which contains 4,614 scenes across 81 different types. With the dataset, we build an end-to-end embodied agent, NatVLM, by fine-tuning a Large Vision Language Model (LVLM) through imitation learning. The LVLM is trained to take previous observations from the environment and generate the next actions. We also introduce CoT explanation traces of the action prediction for better performance when tuning LVLMs. Our extensive experiments find that we can build a performant LVLM-based agent through imitation learning on the shortest paths constructed by a BFS planner without any human supervision. Our agent achieves a success rate that surpasses GPT-4o by over 20%. Meanwhile, we carry out various analyses showing the generalization ability of our agent.<|reference_end|> | arxiv | @article{wang2024divscene:,
title={DivScene: Benchmarking LVLMs for Object Navigation with Diverse Scenes
and Objects},
author={Zhaowei Wang and Hongming Zhang and Tianqing Fang and Ye Tian and Yue
Yang and Kaixin Ma and Xiaoman Pan and Yangqiu Song and Dong Yu},
journal={arXiv preprint arXiv:2410.02730},
year={2024},
archivePrefix={arXiv},
eprint={2410.02730},
primaryClass={cs.CV cs.CL cs.RO}
} | wang2024divscene: |
arxiv-665198 | 2410.02732 | Custom Non-Linear Model Predictive Control for Obstacle Avoidance in Indoor and Outdoor Environments | <|reference_start|>Custom Non-Linear Model Predictive Control for Obstacle Avoidance in Indoor and Outdoor Environments: Navigating complex environments requires Unmanned Aerial Vehicles (UAVs) and autonomous systems to perform trajectory tracking and obstacle avoidance in real-time. While many control strategies have effectively utilized linear approximations, addressing the non-linear dynamics of UAVs, especially in obstacle-dense environments, remains a key challenge that requires further research. This paper introduces a Non-linear Model Predictive Control (NMPC) framework for the DJI Matrice 100, addressing these challenges by using a dynamic model and B-spline interpolation for smooth reference trajectories, ensuring minimal deviation while respecting safety constraints. The framework supports various trajectory types and employs a penalty-based cost function for control accuracy in tight maneuvers. The framework utilizes CasADi for efficient real-time optimization, enabling the UAV to maintain robust operation even under tight computational constraints. Simulation and real-world indoor and outdoor experiments demonstrated the NMPC's ability to adapt to disturbances, resulting in smooth, collision-free navigation.<|reference_end|> | arxiv | @article{laban2024custom,
title={Custom Non-Linear Model Predictive Control for Obstacle Avoidance in
Indoor and Outdoor Environments},
author={Lara Laban and Mariusz Wzorek and Piotr Rudol and Tommy Persson},
journal={arXiv preprint arXiv:2410.02732},
year={2024},
archivePrefix={arXiv},
eprint={2410.02732},
primaryClass={cs.RO cs.AI cs.AR cs.CE cs.SY eess.SY}
} | laban2024custom |
arxiv-665199 | 2410.02733 | Data Similarity-Based One-Shot Clustering for Multi-Task Hierarchical Federated Learning | <|reference_start|>Data Similarity-Based One-Shot Clustering for Multi-Task Hierarchical Federated Learning: We address the problem of cluster identity estimation in a hierarchical federated learning setting in which users work toward learning different tasks. To overcome the challenge of task heterogeneity, users need to be grouped in a way such that users with the same task are in the same group, conducting training together, while sharing the weights of feature extraction layers with the other groups. Toward that end, we propose a one-shot clustering algorithm that can effectively identify and group users based on their data similarity. This enables more efficient collaboration and sharing of a common layer representation within the federated learning system. Our proposed algorithm not only enhances the clustering process, but also overcomes challenges related to privacy concerns, communication overhead, and the need for prior knowledge about learning models or loss function behaviors. We validate our proposed algorithm using various datasets such as CIFAR-10 and Fashion MNIST, and show that it outperforms the baseline in terms of accuracy and variance reduction.<|reference_end|> | arxiv | @article{ali2024data,
title={Data Similarity-Based One-Shot Clustering for Multi-Task Hierarchical
Federated Learning},
author={Abdulmoneam Ali and Ahmed Arafa},
journal={arXiv preprint arXiv:2410.02733},
year={2024},
archivePrefix={arXiv},
eprint={2410.02733},
primaryClass={cs.LG cs.IT cs.NI eess.SP math.IT}
} | ali2024data |
arxiv-665200 | 2410.02735 | OOD-Chameleon: Is Algorithm Selection for OOD Generalization Learnable? | <|reference_start|>OOD-Chameleon: Is Algorithm Selection for OOD Generalization Learnable?: Out-of-distribution (OOD) generalization is challenging because distribution shifts come in many forms. A multitude of learning algorithms exist and each can improve performance in specific OOD situations. We posit that much of the challenge of OOD generalization lies in choosing the right algorithm for the right dataset. However, such algorithm selection is often elusive under complex real-world shifts. In this work, we formalize the task of algorithm selection for OOD generalization and investigate whether it could be approached by learning. We propose a solution, dubbed OOD-Chameleon that treats the task as a supervised classification over candidate algorithms. We construct a dataset of datasets to learn from, which represents diverse types, magnitudes and combinations of shifts (covariate shift, label shift, spurious correlations). We train the model to predict the relative performance of algorithms given a dataset's characteristics. This enables a priori selection of the best learning strategy, i.e. without training various models as needed with traditional model selection. Our experiments show that the adaptive selection outperforms any individual algorithm and simple selection heuristics, on unseen datasets of controllable and realistic image data. Inspecting the model shows that it learns non-trivial data/algorithms interactions, and reveals the conditions for any one algorithm to surpass another. This opens new avenues for (1) enhancing OOD generalization with existing algorithms instead of designing new ones, and (2) gaining insights into the applicability of existing algorithms with respect to datasets' properties.<|reference_end|> | arxiv | @article{jiang2024ood-chameleon:,
title={OOD-Chameleon: Is Algorithm Selection for OOD Generalization Learnable?},
author={Liangze Jiang and Damien Teney},
journal={arXiv preprint arXiv:2410.02735},
year={2024},
archivePrefix={arXiv},
eprint={2410.02735},
primaryClass={cs.LG}
} | jiang2024ood-chameleon: |