corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-666501 | 2410.04992 | MC-QDSNN: Quantized Deep evolutionary SNN with Multi-Dendritic Compartment Neurons for Stress Detection using Physiological Signals | <|reference_start|>MC-QDSNN: Quantized Deep evolutionary SNN with Multi-Dendritic Compartment Neurons for Stress Detection using Physiological Signals: Long short-term memory (LSTM) has emerged as a definitive network for analyzing and inferring time series data. LSTM has the capability to extract spectral features and a mixture of temporal features. Due to this benefit, a similar feature extraction method is explored for the spiking counterparts targeting time-series data. Though LSTMs perform well in their spiking form, they tend to be compute and power intensive. Addressing this issue, this work proposes Multi-Compartment Leaky (MCLeaky) neuron as a viable alternative for efficient processing of time series data. The MCLeaky neuron, derived from the Leaky Integrate and Fire (LIF) neuron model, contains multiple memristive synapses interlinked to form a memory component, which emulates the human brain's Hippocampus region. The proposed MCLeaky neuron based Spiking Neural Network model and its quantized variant were benchmarked against state-of-the-art (SOTA) Spiking LSTMs to perform human stress detection, by comparing compute requirements, latency and real-world performances on unseen data with models derived through Neural Architecture Search (NAS). Results show that networks with MCLeaky activation neuron managed a superior accuracy of 98.8% to detect stress based on Electrodermal Activity (EDA) signals, better than any other investigated models, while using 20% less parameters on average. MCLeaky neuron was also tested for various signals including EDA Wrist and Chest, Temperature, ECG, and combinations of them. Quantized MCLeaky model was also derived and validated to forecast their performance on hardware architectures, which resulted in 91.84% accuracy. 
The neurons were evaluated for multiple modalities of data towards stress detection, which resulted in energy savings of 25.12x to 39.20x and EDP gains of 52.37x to 81.9x over ANNs, while offering a best accuracy of 98.8% when compared with the rest of the SOTA implementations.<|reference_end|> | arxiv | @article{s2024mc-qdsnn:,
title={MC-QDSNN: Quantized Deep evolutionary SNN with Multi-Dendritic
Compartment Neurons for Stress Detection using Physiological Signals},
  author={Ajay B S and Phani Pavan K and Madhav Rao},
journal={arXiv preprint arXiv:2410.04992},
year={2024},
doi={10.1109/TCAD.2024.3484353},
archivePrefix={arXiv},
eprint={2410.04992},
primaryClass={cs.NE cs.LG}
} | s2024mc-qdsnn: |
arxiv-666502 | 2410.04996 | Assumption-Lean Post-Integrated Inference with Negative Control Outcomes | <|reference_start|>Assumption-Lean Post-Integrated Inference with Negative Control Outcomes: Data integration has become increasingly common in aligning multiple heterogeneous datasets. With high-dimensional outcomes, data integration methods aim to extract low-dimensional embeddings of observations to remove unwanted variations, such as batch effects and unmeasured covariates, inherent in data collected from different sources. However, multiple hypothesis testing after data integration can be substantially biased due to the data-dependent integration processes. To address this challenge, we introduce a robust post-integrated inference (PII) method that adjusts for latent heterogeneity using negative control outcomes. By leveraging causal interpretations, we derive nonparametric identification conditions that form the basis of our PII approach. Our assumption-lean semiparametric inference method extends robustness and generality to projected direct effect estimands that account for mediators, confounders, and moderators. These estimands remain statistically meaningful under model misspecifications and with error-prone embeddings. We provide deterministic quantifications of the bias of target estimands induced by estimated embeddings and finite-sample linear expansions of the estimators with uniform concentration bounds on the residuals for all outcomes. The proposed doubly robust estimators are consistent and efficient under minimal assumptions, facilitating data-adaptive estimation with machine learning algorithms. Using random forests, we evaluate empirical statistical errors in simulations and analyze single-cell CRISPR perturbed datasets with potential unmeasured confounders.<|reference_end|> | arxiv | @article{du2024assumption-lean,
title={Assumption-Lean Post-Integrated Inference with Negative Control Outcomes},
author={Jin-Hong Du and Kathryn Roeder and Larry Wasserman},
journal={arXiv preprint arXiv:2410.04996},
year={2024},
archivePrefix={arXiv},
eprint={2410.04996},
primaryClass={stat.ME cs.LG q-bio.GN stat.AP stat.ML}
} | du2024assumption-lean |
arxiv-666503 | 2410.04998 | Nonlinearity helps the convergence of the inverse Born series | <|reference_start|>Nonlinearity helps the convergence of the inverse Born series: In previous work of the authors, we investigated the Born and inverse Born series for a scalar wave equation with linear and nonlinear terms, the nonlinearity being cubic of Kerr type [8]. We reported conditions which guarantee convergence of the inverse Born series, enabling recovery of the coefficients of the linear and nonlinear terms. In this work, we show that if the coefficient of the linear term is known, an arbitrarily strong Kerr nonlinearity can be reconstructed, for sufficiently small data. Additionally, we show that similar convergence results hold for general polynomial nonlinearities. Our results are illustrated with numerical examples.<|reference_end|> | arxiv | @article{defilippis2024nonlinearity,
title={Nonlinearity helps the convergence of the inverse Born series},
  author={Nicholas Defilippis and Shari Moskow and John C. Schotland},
journal={arXiv preprint arXiv:2410.04998},
year={2024},
archivePrefix={arXiv},
eprint={2410.04998},
primaryClass={math.NA cs.NA math-ph math.MP}
} | defilippis2024nonlinearity |
arxiv-666504 | 2410.05000 | Robust Discontinuous Galerkin Methods Maintaining Physical Constraints for General Relativistic Hydrodynamics | <|reference_start|>Robust Discontinuous Galerkin Methods Maintaining Physical Constraints for General Relativistic Hydrodynamics: Simulating general relativistic hydrodynamics (GRHD) presents challenges such as handling curved spacetime, achieving high-order shock-capturing accuracy, and preserving key physical constraints (positive density, pressure, and subluminal velocity) under nonlinear coupling. This paper introduces high-order, physical-constraint-preserving, oscillation-eliminating discontinuous Galerkin (PCP-OEDG) schemes with Harten-Lax-van Leer flux for GRHD. To suppress spurious oscillations near discontinuities, we incorporate a computationally efficient oscillation-eliminating (OE) procedure based on a linear damping equation, maintaining accuracy and avoiding complex characteristic decomposition. To enhance stability and robustness, we construct PCP schemes using the W-form of GRHD equations with Cholesky decomposition of the spatial metric, addressing the non-equivalence of admissible state sets in curved spacetime. We rigorously prove the PCP property of cell averages via technical estimates and the Geometric Quasi-Linearization (GQL) approach, which transforms nonlinear constraints into linear forms. Additionally, we present provably convergent PCP iterative algorithms for robust recovery of primitive variables, ensuring physical constraints are satisfied throughout. The PCP-OEDG method is validated through extensive tests, demonstrating its robustness, accuracy, and capability to handle extreme GRHD scenarios involving strong shocks, high Lorentz factors, and intense gravitational fields.<|reference_end|> | arxiv | @article{cao2024robust,
title={Robust Discontinuous Galerkin Methods Maintaining Physical Constraints
for General Relativistic Hydrodynamics},
  author={Huihui Cao and Manting Peng and Kailiang Wu},
journal={arXiv preprint arXiv:2410.05000},
year={2024},
archivePrefix={arXiv},
eprint={2410.05000},
primaryClass={math.NA astro-ph.IM cs.NA gr-qc physics.comp-ph}
} | cao2024robust |
arxiv-666505 | 2410.05001 | Quantum property testing in sparse directed graphs | <|reference_start|>Quantum property testing in sparse directed graphs: We initiate the study of quantum property testing in sparse directed graphs, and more particularly in the unidirectional model, where the algorithm is allowed to query only the outgoing edges of a vertex. In the classical unidirectional model the problem of testing $k$-star-freeness, and more generally $k$-source-subgraph-freeness, is almost maximally hard for large $k$. We prove that this problem has almost quadratic advantage in the quantum setting. Moreover, we prove that this advantage is nearly tight, by showing a quantum lower bound using the method of dual polynomials on an intermediate problem for a new, property testing version of the $k$-collision problem that was not studied before. To illustrate that not all problems in graph property testing admit such a quantum speedup, we consider the problem of $3$-colorability in the related undirected bounded-degree model, when graphs are now undirected. This problem is maximally hard to test classically, and we show that also quantumly it requires a linear number of queries.<|reference_end|> | arxiv | @article{apers2024quantum,
title={Quantum property testing in sparse directed graphs},
  author={Simon Apers and Frédéric Magniez and Sayantan Sen and Dániel Szabó},
journal={arXiv preprint arXiv:2410.05001},
year={2024},
archivePrefix={arXiv},
eprint={2410.05001},
primaryClass={quant-ph cs.DS}
} | apers2024quantum |
arxiv-666506 | 2410.05002 | Social Network Datasets on Reddit Financial Discussion | <|reference_start|>Social Network Datasets on Reddit Financial Discussion: Stock markets are impacted by a large variety of factors including news and discussions among investors about investment opportunities. With the emergence of social media, new opportunities for having financial discussions arose. The market frenzy surrounding GameStop (GME) on the Reddit subreddit Wallstreetbets, caused financial discussion forums to receive widespread attention and it was established that Wallstreetbets played a leading role in the stock market movements of GME. Here, we present a new data set for exploring the effect of social media discussion forums on the stock market. The dataset consists of posts published on various Reddit subreddits concerning the popular meme stocks GameStop (GME), American Multi-Cinema Entertainment Holdings (AMC), and BlackBerry (BB). We document the data collection and processing steps and show that the posts and comments about these meme stocks are related to their market movements.<|reference_end|> | arxiv | @article{wang2024social,
title={Social Network Datasets on Reddit Financial Discussion},
  author={Zezhong Wang and Siyang Hao and Inez Maria Zwetsloot and Simon Trimborn},
journal={arXiv preprint arXiv:2410.05002},
year={2024},
archivePrefix={arXiv},
eprint={2410.05002},
primaryClass={cs.SI}
} | wang2024social |
arxiv-666507 | 2410.05004 | Fast State Restoration in LLM Serving with HCache | <|reference_start|>Fast State Restoration in LLM Serving with HCache: The growing complexity of LLM usage today, e.g., multi-round conversation and retrieval-augmented generation (RAG), makes contextual states (i.e., KV cache) reusable across user requests. Given the capacity constraints of GPU memory, only a limited number of contexts can be cached on GPU for reusing. Existing inference systems typically evict part of the KV cache and restore it by recomputing it from the original tokens or offloading it to host storage for later retrieval, both of which introduce substantial computational or I/O overheads. We propose HCache, a novel LLM state restoration method. Its key idea is to restore LLM states from intermediate activations and thus utilize computational and I/O resources with low overhead. We enhance HCache with two techniques, including i) a bubble-free restoration scheduler that integrates resource-complementary methods to optimize the balance between computation and IO tasks; and ii) a chunk-based storage manager to address the layout mismatch issue (i.e., layer-before-token saving versus token-before-layer restoration). Our evaluations, conducted using real-world tasks, show that HCache reduces the TTFT by up to 1.93X compared to KV offload while consuming 1.92-2.40X less storage space; compared to token recomputation, HCache achieves up to 5.73X reduction in TTFT.<|reference_end|> | arxiv | @article{gao2024fast,
title={Fast State Restoration in LLM Serving with HCache},
  author={Shiwei Gao and Youmin Chen and Jiwu Shu},
journal={arXiv preprint arXiv:2410.05004},
year={2024},
archivePrefix={arXiv},
eprint={2410.05004},
primaryClass={cs.DC}
} | gao2024fast |
arxiv-666508 | 2410.05006 | SkillMatch: Evaluating Self-supervised Learning of Skill Relatedness | <|reference_start|>SkillMatch: Evaluating Self-supervised Learning of Skill Relatedness: Accurately modeling the relationships between skills is a crucial part of human resources processes such as recruitment and employee development. Yet, no benchmarks exist to evaluate such methods directly. We construct and release SkillMatch, a benchmark for the task of skill relatedness, based on expert knowledge mining from millions of job ads. Additionally, we propose a scalable self-supervised learning technique to adapt a Sentence-BERT model based on skill co-occurrence in job ads. This new method greatly surpasses traditional models for skill relatedness as measured on SkillMatch. By releasing SkillMatch publicly, we aim to contribute a foundation for research towards increased accuracy and transparency of skill-based recommendation systems.<|reference_end|> | arxiv | @article{decorte2024skillmatch:,
title={SkillMatch: Evaluating Self-supervised Learning of Skill Relatedness},
  author={Jens-Joris Decorte and Jeroen Van Hautte and Thomas Demeester and Chris Develder},
journal={arXiv preprint arXiv:2410.05006},
year={2024},
archivePrefix={arXiv},
eprint={2410.05006},
primaryClass={cs.CL}
} | decorte2024skillmatch: |
arxiv-666509 | 2410.05007 | A Semantic Model for Physical Layer Deception | <|reference_start|>A Semantic Model for Physical Layer Deception: Physical layer deception (PLD) is a novel security mechanism that combines physical layer security (PLS) with deception technologies to actively defend against eavesdroppers. In this paper, we establish a novel semantic model for PLD that evaluates its performance in terms of semantic distortion. By analyzing semantic distortion at varying levels of knowledge on the receiver's part regarding the key, we derive the receiver's optimal decryption strategy, and consequently, the transmitter's optimal deception strategy. The proposed semantic model provides a more generic understanding of the PLD approach independent from coding or multiplexing schemes, and allows for efficient real-time adaptation to fading channels.<|reference_end|> | arxiv | @article{han2024a,
title={A Semantic Model for Physical Layer Deception},
  author={Bin Han and Yao Zhu and Anke Schmeink and Giuseppe Caire and Hans D. Schotten},
journal={arXiv preprint arXiv:2410.05007},
year={2024},
archivePrefix={arXiv},
eprint={2410.05007},
primaryClass={cs.IT math.IT}
} | han2024a |
arxiv-666510 | 2410.05015 | Anticipating Human Behavior for Safe Navigation and Efficient Collaborative Manipulation with Mobile Service Robots | <|reference_start|>Anticipating Human Behavior for Safe Navigation and Efficient Collaborative Manipulation with Mobile Service Robots: The anticipation of human behavior is a crucial capability for robots to interact with humans safely and efficiently. We employ a smart edge sensor network to provide global observations along with future predictions and goal information to integrate anticipatory behavior for the control of a mobile manipulation robot. We present approaches to anticipate human behavior in the context of safe navigation and a collaborative mobile manipulation task. First, we anticipate human motion by employing projections of human trajectories from smart edge sensor network observations into the planning map of a mobile robot. Second, we anticipate human intentions in a collaborative furniture-carrying task to achieve a given goal. Our experiments indicate that anticipating human behavior allows for safer navigation and more efficient collaboration. Finally, we showcase an integrated system that anticipates human behavior and collaborates with a human to achieve a target room layout, including the placement of tables and chairs.<|reference_end|> | arxiv | @article{bultmann2024anticipating,
title={Anticipating Human Behavior for Safe Navigation and Efficient
Collaborative Manipulation with Mobile Service Robots},
  author={Simon Bultmann and Raphael Memmesheimer and Jan Nogga and Julian Hau and Sven Behnke},
journal={arXiv preprint arXiv:2410.05015},
year={2024},
archivePrefix={arXiv},
eprint={2410.05015},
primaryClass={cs.RO}
} | bultmann2024anticipating |
arxiv-666511 | 2410.05016 | T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data | <|reference_start|>T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data: Self-supervision is often used for pre-training to foster performance on a downstream task by constructing meaningful representations of samples. Self-supervised learning (SSL) generally involves generating different views of the same sample and thus requires data augmentations that are challenging to construct for tabular data. This constitutes one of the main challenges of self-supervision for structured data. In the present work, we propose a novel augmentation-free SSL method for tabular data. Our approach, T-JEPA, relies on a Joint Embedding Predictive Architecture (JEPA) and is akin to mask reconstruction in the latent space. It involves predicting the latent representation of one subset of features from the latent representation of a different subset within the same sample, thereby learning rich representations without augmentations. We use our method as a pre-training technique and train several deep classifiers on the obtained representation. Our experimental results demonstrate a substantial improvement in both classification and regression tasks, outperforming models trained directly on samples in their original data space. Moreover, T-JEPA enables some methods to consistently outperform or match the performance of traditional methods likes Gradient Boosted Decision Trees. To understand why, we extensively characterize the obtained representations and show that T-JEPA effectively identifies relevant features for downstream tasks without access to the labels. Additionally, we introduce regularization tokens, a novel regularization method critical for training of JEPA-based models on structured data.<|reference_end|> | arxiv | @article{thimonier2024t-jepa:,
title={T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data},
  author={Hugo Thimonier and José Lucas De Melo Costa and Fabrice Popineau and Arpad Rimmel and Bich-Liên Doan},
journal={arXiv preprint arXiv:2410.05016},
year={2024},
archivePrefix={arXiv},
eprint={2410.05016},
primaryClass={cs.LG stat.ML}
} | thimonier2024t-jepa: |
arxiv-666512 | 2410.05017 | Enhanced Multi-Robot SLAM System with Cross-Validation Matching and Exponential Threshold Keyframe Selection | <|reference_start|>Enhanced Multi-Robot SLAM System with Cross-Validation Matching and Exponential Threshold Keyframe Selection: The evolving field of mobile robotics has indeed increased the demand for simultaneous localization and mapping (SLAM) systems. To augment the localization accuracy and mapping efficacy of SLAM, we refined the core module of the SLAM system. Within the feature matching phase, we introduced cross-validation matching to filter out mismatches. In the keyframe selection strategy, an exponential threshold function is constructed to quantify the keyframe selection process. Compared with a single robot, the multi-robot collaborative SLAM (CSLAM) system substantially improves task execution efficiency and robustness. By employing a centralized structure, we formulate a multi-robot SLAM system and design a coarse-to-fine matching approach for multi-map point cloud registration. Our system, built upon ORB-SLAM3, underwent extensive evaluation utilizing the TUM RGB-D, EuRoC MAV, and TUM_VI datasets. The experimental results demonstrate a significant improvement in the positioning accuracy and mapping quality of our enhanced algorithm compared to those of ORB-SLAM3, with a 12.90% reduction in the absolute trajectory error.<|reference_end|> | arxiv | @article{he2024enhanced,
title={Enhanced Multi-Robot SLAM System with Cross-Validation Matching and
Exponential Threshold Keyframe Selection},
  author={Ang He and Xi-mei Wu and Xiao-bin Guo and Li-bin Liu},
journal={arXiv preprint arXiv:2410.05017},
year={2024},
archivePrefix={arXiv},
eprint={2410.05017},
primaryClass={cs.RO}
} | he2024enhanced |
arxiv-666513 | 2410.05018 | On the Biased Assessment of Expert Finding Systems | <|reference_start|>On the Biased Assessment of Expert Finding Systems: In large organisations, identifying experts on a given topic is crucial in leveraging the internal knowledge spread across teams and departments. So-called enterprise expert retrieval systems automatically discover and structure employees' expertise based on the vast amount of heterogeneous data available about them and the work they perform. Evaluating these systems requires comprehensive ground truth expert annotations, which are hard to obtain. Therefore, the annotation process typically relies on automated recommendations of knowledge areas to validate. This case study provides an analysis of how these recommendations can impact the evaluation of expert finding systems. We demonstrate on a popular benchmark that system-validated annotations lead to overestimated performance of traditional term-based retrieval models and even invalidate comparisons with more recent neural methods. We also augment knowledge areas with synonyms to uncover a strong bias towards literal mentions of their constituent words. Finally, we propose constraints to the annotation process to prevent these biased evaluations, and show that this still allows annotation suggestions of high utility. These findings should inform benchmark creation or selection for expert finding, to guarantee meaningful comparison of methods.<|reference_end|> | arxiv | @article{decorte2024on,
title={On the Biased Assessment of Expert Finding Systems},
  author={Jens-Joris Decorte and Jeroen Van Hautte and Chris Develder and Thomas Demeester},
journal={arXiv preprint arXiv:2410.05018},
year={2024},
archivePrefix={arXiv},
eprint={2410.05018},
primaryClass={cs.IR cs.CL}
} | decorte2024on |
arxiv-666514 | 2410.05019 | RelUNet: Relative Channel Fusion U-Net for Multichannel Speech Enhancement | <|reference_start|>RelUNet: Relative Channel Fusion U-Net for Multichannel Speech Enhancement: Neural multi-channel speech enhancement models, in particular those based on the U-Net architecture, demonstrate promising performance and generalization potential. These models typically encode input channels independently, and integrate the channels during later stages of the network. In this paper, we propose a novel modification of these models by incorporating relative information from the outset, where each channel is processed in conjunction with a reference channel through stacking. This input strategy exploits comparative differences to adaptively fuse information between channels, thereby capturing crucial spatial information and enhancing the overall performance. The experiments conducted on the CHiME-3 dataset demonstrate improvements in speech enhancement metrics across various architectures.<|reference_end|> | arxiv | @article{aldarmaki2024relunet:,
title={RelUNet: Relative Channel Fusion U-Net for Multichannel Speech
Enhancement},
  author={Ibrahim Aldarmaki and Thamar Solorio and Bhiksha Raj and Hanan Aldarmaki},
journal={arXiv preprint arXiv:2410.05019},
year={2024},
archivePrefix={arXiv},
eprint={2410.05019},
primaryClass={cs.SD cs.LG eess.AS}
} | aldarmaki2024relunet: |
arxiv-666515 | 2410.05020 | FRIDA: Free-Rider Detection using Privacy Attacks | <|reference_start|>FRIDA: Free-Rider Detection using Privacy Attacks: Federated learning is increasingly popular as it enables multiple parties with limited datasets and resources to train a high-performing machine learning model collaboratively. However, similarly to other collaborative systems, federated learning is vulnerable to free-riders -- participants who do not contribute to the training but still benefit from the shared model. Free-riders not only compromise the integrity of the learning process but also slow down the convergence of the global model, resulting in increased costs for the honest participants. To address this challenge, we propose FRIDA: free-rider detection using privacy attacks, a framework that leverages inference attacks to detect free-riders. Unlike traditional methods that only capture the implicit effects of free-riding, FRIDA directly infers details of the underlying training datasets, revealing characteristics that indicate free-rider behaviour. Through extensive experiments, we demonstrate that membership and property inference attacks are effective for this purpose. Our evaluation shows that FRIDA outperforms state-of-the-art methods, especially in non-IID settings.<|reference_end|> | arxiv | @article{recasens2024frida:,
title={FRIDA: Free-Rider Detection using Privacy Attacks},
  author={Pol G. Recasens and Ádám Horváth and Alberto Gutierrez-Torre and Jordi Torres and Josep Ll. Berral and Balázs Pejó},
journal={arXiv preprint arXiv:2410.05020},
year={2024},
archivePrefix={arXiv},
eprint={2410.05020},
primaryClass={cs.LG cs.CR}
} | recasens2024frida: |
arxiv-666516 | 2410.05021 | DEPT: Decoupled Embeddings for Pre-training Language Models | <|reference_start|>DEPT: Decoupled Embeddings for Pre-training Language Models: Language Model pre-training benefits from a broader data mixture to enhance performance across domains and languages. However, training on such heterogeneous text corpora is complex, requiring extensive and cost-intensive efforts. Since these data sources vary in lexical, syntactic, and semantic aspects, they cause negative interference or the "curse of multilinguality". We propose a novel pre-training framework to alleviate this curse. Our method, DEPT, decouples the embedding layers from the transformer body while simultaneously training the latter in multiple contexts. DEPT enables the model to train without being bound to a shared global vocabulary. DEPT: (1) can train robustly and effectively under significant data heterogeneity, (2) reduces the parameter count of the token embeddings by up to 80% and the communication costs by 675x for billion-scale models (3) enhances model generalization and plasticity in adapting to new languages and domains, and (4) allows training with custom optimized vocabulary per data source. We prove DEPT's potential by performing the first vocabulary-agnostic federated multilingual pre-training of a 1.3 billion-parameter model across high and low-resource languages, reducing its parameter count by 409 million.<|reference_end|> | arxiv | @article{iacob2024dept:,
title={DEPT: Decoupled Embeddings for Pre-training Language Models},
  author={Alex Iacob and Lorenzo Sani and Meghdad Kurmanji and William F. Shen and Xinchi Qiu and Dongqi Cai and Yan Gao and Nicholas D. Lane},
journal={arXiv preprint arXiv:2410.05021},
year={2024},
archivePrefix={arXiv},
eprint={2410.05021},
primaryClass={cs.LG cs.CL}
} | iacob2024dept: |
arxiv-666517 | 2410.05026 | Active Fine-Tuning of Generalist Policies | <|reference_start|>Active Fine-Tuning of Generalist Policies: Pre-trained generalist policies are rapidly gaining relevance in robot learning due to their promise of fast adaptation to novel, in-domain tasks. This adaptation often relies on collecting new demonstrations for a specific task of interest and applying imitation learning algorithms, such as behavioral cloning. However, as soon as several tasks need to be learned, we must decide which tasks should be demonstrated and how often? We study this multi-task problem and explore an interactive framework in which the agent adaptively selects the tasks to be demonstrated. We propose AMF (Active Multi-task Fine-tuning), an algorithm to maximize multi-task policy performance under a limited demonstration budget by collecting demonstrations yielding the largest information gain on the expert policy. We derive performance guarantees for AMF under regularity assumptions and demonstrate its empirical effectiveness to efficiently fine-tune neural policies in complex and high-dimensional environments.<|reference_end|> | arxiv | @article{bagatella2024active,
title={Active Fine-Tuning of Generalist Policies},
  author={Marco Bagatella and Jonas Hübotter and Georg Martius and Andreas Krause},
journal={arXiv preprint arXiv:2410.05026},
year={2024},
archivePrefix={arXiv},
eprint={2410.05026},
primaryClass={cs.LG cs.RO}
} | bagatella2024active |
arxiv-666518 | 2410.05033 | Extended Functional Representation Lemma: A Tool For Privacy, Semantic Representation, Caching, and Compression Design | <|reference_start|>Extended Functional Representation Lemma: A Tool For Privacy, Semantic Representation, Caching, and Compression Design: This paper provides an overview of a problem in information-theoretic privacy mechanism design, addressing two scenarios in which private data is either observable or hidden. In each scenario, different privacy measures are used, including bounded mutual information and two types of per-letter privacy constraints. Considering the first scenario, an agent observes useful data that is correlated with private data, and wants to disclose the useful information to a user. Due to the privacy concerns, direct disclosure is prohibited. Hence, a privacy mechanism is designed to generate disclosed data which maximizes the revealed information about the useful data while satisfying a privacy constraint. In the second scenario, the agent has additionally access to the private data. We discuss how the Functional Representation Lemma, the Strong Functional Representation Lemma, and their extended versions are useful for designing low-complexity privacy mechanisms that achieve optimal privacy-utility trade-offs under certain constraints. Furthermore, another privacy design problem is presented where part of the private attribute is more private than the remaining part. Finally, we provide applications including semantic communications, caching and delivery, and compression designs, where the approach can be applied.<|reference_end|> | arxiv | @article{zamani2024extended,
title={Extended Functional Representation Lemma: A Tool For Privacy, Semantic
Representation, Caching, and Compression Design},
  author={Amirreza Zamani and Mikael Skoglund},
journal={arXiv preprint arXiv:2410.05033},
year={2024},
archivePrefix={arXiv},
eprint={2410.05033},
primaryClass={cs.IT math.IT}
} | zamani2024extended |
arxiv-666519 | 2410.05037 | Improving Speaker Representations Using Contrastive Losses on Multi-scale Features | <|reference_start|>Improving Speaker Representations Using Contrastive Losses on Multi-scale Features: Speaker verification systems have seen significant advancements with the introduction of Multi-scale Feature Aggregation (MFA) architectures, such as MFA-Conformer and ECAPA-TDNN. These models leverage information from various network depths by concatenating intermediate feature maps before the pooling and projection layers, demonstrating that even shallower feature maps encode valuable speaker-specific information. Building upon this foundation, we propose a Multi-scale Feature Contrastive (MFCon) loss that directly enhances the quality of these intermediate representations. Our MFCon loss applies contrastive learning to all feature maps within the network, encouraging the model to learn more discriminative representations at the intermediate stage itself. By enforcing better feature map learning, we show that the resulting speaker embeddings exhibit increased discriminative power. Our method achieves a 9.05% improvement in equal error rate (EER) compared to the standard MFA-Conformer on the VoxCeleb-1O test set.<|reference_end|> | arxiv | @article{dixit2024improving,
title={Improving Speaker Representations Using Contrastive Losses on
Multi-scale Features},
author={Satvik Dixit, Massa Baali, Rita Singh, Bhiksha Raj},
journal={arXiv preprint arXiv:2410.05037},
year={2024},
archivePrefix={arXiv},
eprint={2410.05037},
primaryClass={cs.SD eess.AS}
} | dixit2024improving |
arxiv-666520 | 2410.05038 | GARField: Addressing the visual Sim-to-Real gap in garment manipulation with mesh-attached radiance fields | <|reference_start|>GARField: Addressing the visual Sim-to-Real gap in garment manipulation with mesh-attached radiance fields: While humans intuitively manipulate garments and other textile items swiftly and accurately, it is a significant challenge for robots. A factor crucial to human performance is the ability to imagine, a priori, the intended result of the manipulation intents and hence develop predictions on the garment pose. This allows us to plan from highly obstructed states, adapt our plans as we collect more information and react swiftly to unforeseen circumstances. Robots, on the other hand, struggle to establish such intuitions and form tight links between plans and observations. This can be attributed in part to the high cost of obtaining densely labelled data for textile manipulation, both in quality and quantity. The problem of data collection is a long-standing issue in data-based approaches to garment manipulation. Currently, the generation of high quality and labelled garment manipulation data is mainly attempted through advanced data capture procedures that create simplified state estimations from real-world observations. In this work, however, we propose to generate real-world observations from given object states. To achieve this, we present GARField (Garment Attached Radiance Field), a differentiable rendering architecture allowing data generation from simulated states stored as triangle meshes. Code will be available on https://ddonatien.github.io/garfield-website/<|reference_end|> | arxiv | @article{delehelle2024garfield:,
title={GARField: Addressing the visual Sim-to-Real gap in garment manipulation
with mesh-attached radiance fields},
author={Donatien Delehelle, Darwin G. Caldwell and Fei Chen},
journal={arXiv preprint arXiv:2410.05038},
year={2024},
archivePrefix={arXiv},
eprint={2410.05038},
primaryClass={cs.RO cs.GR}
} | delehelle2024garfield: |
arxiv-666521 | 2410.05040 | A nodally bound-preserving discontinuous Galerkin method for the drift-diffusion equation | <|reference_start|>A nodally bound-preserving discontinuous Galerkin method for the drift-diffusion equation: In this work, we introduce and analyse discontinuous Galerkin (dG) methods for the drift-diffusion model. We explore two dG formulations: a classical interior penalty approach and a nodally bound-preserving method. Whilst the interior penalty method demonstrates well-posedness and convergence, it fails to guarantee non-negativity of the solution. To address this deficit, which is often important to ensure in applications, we employ a positivity-preserving method based on a convex subset formulation, ensuring the non-negativity of the solution at the Lagrange nodes. We validate our findings by summarising extensive numerical experiments, highlighting the novelty and effectiveness of our approach in handling the complexities of charge carrier transport.<|reference_end|> | arxiv | @article{barrenechea2024a,
title={A nodally bound-preserving discontinuous Galerkin method for the
drift-diffusion equation},
author={Gabriel R. Barrenechea and Tristan Pryer and Alex Trenam},
journal={arXiv preprint arXiv:2410.05040},
year={2024},
archivePrefix={arXiv},
eprint={2410.05040},
primaryClass={math.NA cs.NA}
} | barrenechea2024a |
arxiv-666522 | 2410.05041 | Systematic Literature Review of Vision-Based Approaches to Outdoor Livestock Monitoring with Lessons from Wildlife Studies | <|reference_start|>Systematic Literature Review of Vision-Based Approaches to Outdoor Livestock Monitoring with Lessons from Wildlife Studies: Precision livestock farming (PLF) aims to improve the health and welfare of livestock animals and farming outcomes through the use of advanced technologies. Computer vision, combined with recent advances in machine learning and deep learning artificial intelligence approaches, offers a possible solution to the PLF ideal of 24/7 livestock monitoring that helps facilitate early detection of animal health and welfare issues. However, a significant number of livestock species are raised in large outdoor habitats that pose technological challenges for computer vision approaches. This review provides a comprehensive overview of computer vision methods and open challenges in outdoor animal monitoring. We include research from both the livestock and wildlife fields in the review because of the similarities in appearance, behaviour, and habitat for many livestock and wildlife. We focus on large terrestrial mammals, such as cattle, horses, deer, goats, sheep, koalas, giraffes, and elephants. We use an image processing pipeline to frame our discussion and highlight the current capabilities and open technical challenges at each stage of the pipeline. The review found a clear trend towards the use of deep learning approaches for animal detection, counting, and multi-species classification. We discuss in detail the applicability of current vision-based methods to PLF contexts and promising directions for future research.<|reference_end|> | arxiv | @article{scott2024systematic,
title={Systematic Literature Review of Vision-Based Approaches to Outdoor
Livestock Monitoring with Lessons from Wildlife Studies},
author={Stacey D. Scott, Zayn J. Abbas, Feerass Ellid, Eli-Henry Dykhne,
Muhammad Muhaiminul Islam, Weam Ayad, Kristina Kacmorova, Dan Tulpan, Minglun
Gong},
journal={arXiv preprint arXiv:2410.05041},
year={2024},
number={CSL-2024-01},
archivePrefix={arXiv},
eprint={2410.05041},
primaryClass={cs.CV cs.LG}
} | scott2024systematic |
arxiv-666523 | 2410.05044 | PhotoReg: Photometrically Registering 3D Gaussian Splatting Models | <|reference_start|>PhotoReg: Photometrically Registering 3D Gaussian Splatting Models: Building accurate representations of the environment is critical for intelligent robots to make decisions during deployment. Advances in photorealistic environment models have enabled robots to develop hyper-realistic reconstructions, which can be used to generate images that are intuitive for human inspection. In particular, the recently introduced 3D Gaussian Splatting (3DGS), which describes the scene with up to millions of primitive ellipsoids, can be rendered in real time. 3DGS has rapidly gained prominence. However, a critical unsolved problem persists: how can we fuse multiple 3DGS into a single coherent model? Solving this problem will enable robot teams to jointly build 3DGS models of their surroundings. A key insight of this work is to leverage the duality between photorealistic reconstructions, which render realistic 2D images from 3D structure, and 3D foundation models, which predict 3D structure from image pairs. To this end, we develop PhotoReg, a framework to register multiple photorealistic 3DGS models with 3D foundation models. As 3DGS models are generally built from monocular camera images, they have arbitrary scale. To resolve this, PhotoReg actively enforces scale consistency among the different 3DGS models by considering depth estimates within these models. Then, the alignment is iteratively refined with fine-grained photometric losses to produce high-quality fused 3DGS models. We rigorously evaluate PhotoReg on both standard benchmark datasets and our custom-collected datasets, including with two quadruped robots. The code is released at https://ziweny11.github.io/photoreg.<|reference_end|> | arxiv | @article{yuan2024photoreg:,
title={PhotoReg: Photometrically Registering 3D Gaussian Splatting Models},
author={Ziwen Yuan, Tianyi Zhang, Matthew Johnson-Roberson, Weiming Zhi},
journal={arXiv preprint arXiv:2410.05044},
year={2024},
archivePrefix={arXiv},
eprint={2410.05044},
primaryClass={cs.RO cs.AI cs.CV cs.LG}
} | yuan2024photoreg: |
arxiv-666524 | 2410.05045 | Can LLMs plan paths with extra hints from solvers? | <|reference_start|>Can LLMs plan paths with extra hints from solvers?: Large Language Models (LLMs) have shown remarkable capabilities in natural language processing, mathematical problem solving, and tasks related to program synthesis. However, their effectiveness in long-term planning and higher-order reasoning has been noted to be limited and fragile. This paper explores an approach for enhancing LLM performance in solving a classical robotic planning task by integrating solver-generated feedback. We explore four different strategies for providing feedback, including visual feedback; we utilize fine-tuning; and we evaluate the performance of three different LLMs across 10 standard and 100 additional randomly generated planning problems. Our results suggest that the solver-generated feedback improves the LLM's ability to solve the moderately difficult problems, but the harder problems still remain out of reach. The study provides a detailed analysis of the effects of the different hinting strategies and the different planning tendencies of the evaluated LLMs.<|reference_end|> | arxiv | @article{wu2024can,
title={Can LLMs plan paths with extra hints from solvers?},
author={Erik Wu and Sayan Mitra},
journal={arXiv preprint arXiv:2410.05045},
year={2024},
archivePrefix={arXiv},
eprint={2410.05045},
primaryClass={cs.AI cs.CL cs.RO}
} | wu2024can |
arxiv-666525 | 2410.05046 | Named Clinical Entity Recognition Benchmark | <|reference_start|>Named Clinical Entity Recognition Benchmark: This technical report introduces a Named Clinical Entity Recognition Benchmark for evaluating language models in healthcare, addressing the crucial natural language processing (NLP) task of extracting structured information from clinical narratives to support applications like automated coding, clinical trial cohort identification, and clinical decision support. The leaderboard provides a standardized platform for assessing diverse language models, including encoder and decoder architectures, on their ability to identify and classify clinical entities across multiple medical domains. A curated collection of openly available clinical datasets is utilized, encompassing entities such as diseases, symptoms, medications, procedures, and laboratory measurements. Importantly, these entities are standardized according to the Observational Medical Outcomes Partnership (OMOP) Common Data Model, ensuring consistency and interoperability across different healthcare systems and datasets, and a comprehensive evaluation of model performance. Performance of models is primarily assessed using the F1-score, and it is complemented by various assessment modes to provide comprehensive insights into model performance. The report also includes a brief analysis of models evaluated to date, highlighting observed trends and limitations. By establishing this benchmarking framework, the leaderboard aims to promote transparency, facilitate comparative analyses, and drive innovation in clinical entity recognition tasks, addressing the need for robust evaluation methods in healthcare NLP.<|reference_end|> | arxiv | @article{abdul2024named,
title={Named Clinical Entity Recognition Benchmark},
author={Wadood M Abdul, Marco AF Pimentel, Muhammad Umar Salman, Tathagata
Raha, Cl\'ement Christophe, Praveen K Kanithi, Nasir Hayat, Ronnie Rajan,
Shadab Khan},
journal={arXiv preprint arXiv:2410.05046},
year={2024},
archivePrefix={arXiv},
eprint={2410.05046},
primaryClass={cs.CL cs.AI}
} | abdul2024named |
arxiv-666526 | 2410.05047 | A test suite of prompt injection attacks for LLM-based machine translation | <|reference_start|>A test suite of prompt injection attacks for LLM-based machine translation: LLM-based NLP systems typically work by embedding their input data into prompt templates which contain instructions and/or in-context examples, creating queries which are submitted to a LLM, and then parsing the LLM response in order to generate the system outputs. Prompt Injection Attacks (PIAs) are a type of subversion of these systems where a malicious user crafts special inputs which interfere with the prompt templates, causing the LLM to respond in ways unintended by the system designer. Recently, Sun and Miceli-Barone proposed a class of PIAs against LLM-based machine translation. Specifically, the task is to translate questions from the TruthfulQA test suite, where an adversarial prompt is prepended to the questions, instructing the system to ignore the translation instruction and answer the questions instead. In this test suite, we extend this approach to all the language pairs of the WMT 2024 General Machine Translation task. Moreover, we include additional attack formats in addition to the one originally studied.<|reference_end|> | arxiv | @article{miceli-barone2024a,
title={A test suite of prompt injection attacks for LLM-based machine
translation},
author={Antonio Valerio Miceli-Barone and Zhifan Sun},
journal={arXiv preprint arXiv:2410.05047},
year={2024},
archivePrefix={arXiv},
eprint={2410.05047},
primaryClass={cs.CL}
} | miceli-barone2024a |
arxiv-666527 | 2410.05050 | FreSh: Frequency Shifting for Accelerated Neural Representation Learning | <|reference_start|>FreSh: Frequency Shifting for Accelerated Neural Representation Learning: Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs). However, MLPs are known to exhibit a low-frequency bias, limiting their ability to capture high-frequency details accurately. This limitation is typically addressed by incorporating high-frequency input embeddings or specialized activation layers. In this work, we demonstrate that these embeddings and activations are often configured with hyperparameters that perform well on average but are suboptimal for specific input signals under consideration, necessitating a costly grid search to identify optimal settings. Our key observation is that the initial frequency spectrum of an untrained model's output correlates strongly with the model's eventual performance on a given target signal. Leveraging this insight, we propose frequency shifting (or FreSh), a method that selects embedding hyperparameters to align the frequency spectrum of the model's initial output with that of the target signal. We show that this simple initialization technique improves performance across various neural representation methods and tasks, achieving results comparable to extensive hyperparameter sweeps but with only marginal computational overhead compared to training a single model with default hyperparameters.<|reference_end|> | arxiv | @article{kania2024fresh:,
title={FreSh: Frequency Shifting for Accelerated Neural Representation Learning},
author={Adam Kania, Marko Mihajlovic, Sergey Prokudin, Jacek Tabor,
Przemys{\l}aw Spurek},
journal={arXiv preprint arXiv:2410.05050},
year={2024},
archivePrefix={arXiv},
eprint={2410.05050},
primaryClass={cs.LG cs.AI stat.ML}
} | kania2024fresh: |
arxiv-666528 | 2410.05051 | HE-Drive: Human-Like End-to-End Driving with Vision Language Models | <|reference_start|>HE-Drive: Human-Like End-to-End Driving with Vision Language Models: In this paper, we propose HE-Drive: the first human-like-centric end-to-end autonomous driving system to generate trajectories that are both temporally consistent and comfortable. Recent studies have shown that imitation learning-based planners and learning-based trajectory scorers can effectively generate and select accurate trajectories that closely mimic expert demonstrations. However, such trajectory planners and scorers face the dilemma of generating temporally inconsistent and uncomfortable trajectories. To solve the above problems, our HE-Drive first extracts key 3D spatial representations through sparse perception, which then serve as conditional inputs for a Conditional Denoising Diffusion Probabilistic Models (DDPMs)-based motion planner to generate temporally consistent multi-modal trajectories. A Vision-Language Models (VLMs)-guided trajectory scorer subsequently selects the most comfortable trajectory from these candidates to control the vehicle, ensuring human-like end-to-end driving. Experiments show that HE-Drive not only achieves state-of-the-art performance (i.e., reduces the average collision rate by 71% compared to VAD) and efficiency (i.e., 1.9X faster than SparseDrive) on the challenging nuScenes and OpenScene datasets but also provides the most comfortable driving experience on real-world data. For more information, visit the project website: https://jmwang0117.github.io/HE-Drive/.<|reference_end|> | arxiv | @article{wang2024he-drive:,
title={HE-Drive: Human-Like End-to-End Driving with Vision Language Models},
author={Junming Wang, Xingyu Zhang, Zebin Xing, Songen Gu, Xiaoyang Guo, Yang
Hu, Ziying Song, Qian Zhang, Xiaoxiao Long, Wei Yin},
journal={arXiv preprint arXiv:2410.05051},
year={2024},
archivePrefix={arXiv},
eprint={2410.05051},
primaryClass={cs.CV cs.RO}
} | wang2024he-drive: |
arxiv-666529 | 2410.05052 | Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes | <|reference_start|>Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes: Loss spikes, a phenomenon in which the loss value diverges suddenly, are a fundamental issue in the pre-training of large language models. This paper posits that the non-uniformity of the norm of the parameters is one of the causes of loss spikes. In the training of neural networks, the scale of the gradients is required to be kept constant throughout the layers to avoid the vanishing and exploding gradients problem. However, to meet these requirements in the Transformer model, the norm of the model parameters must be non-uniform, and thus, parameters whose norm is smaller are more sensitive to the parameter update. To address this issue, we propose a novel technique, weight scaling as reparameterization (WeSaR). WeSaR introduces a gate parameter per parameter matrix and adjusts it to the value satisfying the requirements. Because of the gate parameter, WeSaR sets the norm of the original parameters uniformly, which results in stable training. Experimental results with Transformer decoders consisting of 130 million, 1.3 billion, and 13 billion parameters showed that WeSaR stabilizes and accelerates training and that it outperformed the compared methods, including popular initialization methods.<|reference_end|> | arxiv | @article{nishida2024initialization,
title={Initialization of Large Language Models via Reparameterization to
Mitigate Loss Spikes},
author={Kosuke Nishida, Kyosuke Nishida, Kuniko Saito},
journal={arXiv preprint arXiv:2410.05052},
year={2024},
archivePrefix={arXiv},
eprint={2410.05052},
primaryClass={cs.CL}
} | nishida2024initialization |
arxiv-666530 | 2410.05055 | Sparse Degree Optimization for BATS Codes | <|reference_start|>Sparse Degree Optimization for BATS Codes: Batched sparse (BATS) code is a class of batched network code that can achieve a close-to-optimal rate when an optimal degree distribution is provided. We observed that most probability masses in this optimal distribution are very small, i.e., the distribution "looks" sparse. In this paper, we investigate the sparsity optimization of degree distribution for BATS codes that produces sparse degree distributions. There are many advantages to use a sparse degree distribution, say, it is robust to precision errors when sampling the degree distribution during encoding and decoding in practice. We discuss a few heuristics and also a way to obtain an exact sparsity solution. These approaches give a trade-off between computational time and achievable rate, thus give us the flexibility to adopt BATS codes in various scenarios, e.g., device with limited computational power, stable channel condition, etc.<|reference_end|> | arxiv | @article{yin2024sparse,
title={Sparse Degree Optimization for BATS Codes},
author={Hoover H. F. Yin and Jie Wang},
journal={arXiv preprint arXiv:2410.05055},
year={2024},
archivePrefix={arXiv},
eprint={2410.05055},
primaryClass={cs.IT math.IT}
} | yin2024sparse |
arxiv-666531 | 2410.05056 | Transition of $\alpha$-mixing in Random Iterations with Applications in Queuing Theory | <|reference_start|>Transition of $\alpha$-mixing in Random Iterations with Applications in Queuing Theory: Nonlinear time series models incorporating exogenous regressors provide the foundation for numerous significant models across econometrics, queuing theory, machine learning, and various other disciplines. Despite their importance, the framework for the statistical analysis of such models is still incomplete. In contrast, multiple versions of the law of large numbers and the (functional) central limit theorem have been established for weakly dependent variables. We prove the transition of mixing properties of the exogenous regressor to the response through a coupling argument, leveraging these established results. Furthermore, we study Markov chains in random environments under a suitable form of drift and minorization condition when the environment process is non-stationary, merely having favorable mixing properties. Following a novel statistical estimation theory approach and using the Cram\'er-Rao lower bound, we also establish the functional central limit theorem. Additionally, we apply our framework to single-server queuing models. Overall, these results open the door to the statistical analysis of a large class of random iterative models.<|reference_end|> | arxiv | @article{lovas2024transition,
title={Transition of $\alpha$-mixing in Random Iterations with Applications in
Queuing Theory},
author={Attila Lovas},
journal={arXiv preprint arXiv:2410.05056},
year={2024},
archivePrefix={arXiv},
eprint={2410.05056},
primaryClass={math.ST cs.AI math.PR stat.TH}
} | lovas2024transition |
arxiv-666532 | 2410.05057 | SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification | <|reference_start|>SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification: Data curation is the problem of how to collect and organize samples into a dataset that supports efficient learning. Despite the centrality of the task, little work has been devoted to a large-scale, systematic comparison of various curation methods. In this work, we take steps towards a formal evaluation of data curation strategies and introduce SELECT, the first large-scale benchmark of curation strategies for image classification. In order to generate baseline methods for the SELECT benchmark, we create a new dataset, ImageNet++, which constitutes the largest superset of ImageNet-1K to date. Our dataset extends ImageNet with 5 new training-data shifts, each approximately the size of ImageNet-1K itself, and each assembled using a distinct curation strategy. We evaluate our data curation baselines in two ways: (i) using each training-data shift to train identical image classification models from scratch, and (ii) using the data itself to fit a pretrained self-supervised representation. Our findings show interesting trends, particularly pertaining to recent methods for data curation such as synthetic data generation and lookup based on CLIP embeddings. We show that although these strategies are highly competitive for certain tasks, the curation strategy used to assemble the original ImageNet-1K dataset remains the gold standard. We anticipate that our benchmark can illuminate the path for new methods to further reduce the gap. We release our checkpoints, code, documentation, and a link to our dataset at https://github.com/jimmyxu123/SELECT.<|reference_end|> | arxiv | @article{feuer2024select:,
title={SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image
Classification},
author={Benjamin Feuer, Jiawei Xu, Niv Cohen, Patrick Yubeaton, Govind Mittal,
Chinmay Hegde},
journal={arXiv preprint arXiv:2410.05057},
year={2024},
archivePrefix={arXiv},
eprint={2410.05057},
primaryClass={cs.CV cs.LG}
} | feuer2024select: |
arxiv-666533 | 2410.05058 | Improving Object Detection via Local-global Contrastive Learning | <|reference_start|>Improving Object Detection via Local-global Contrastive Learning: Visual domain gaps often impact object detection performance. Image-to-image translation can mitigate this effect, where contrastive approaches enable learning of the image-to-image mapping under unsupervised regimes. However, existing methods often fail to handle content-rich scenes with multiple object instances, which manifests in unsatisfactory detection performance. Sensitivity to such instance-level content is typically only gained through object annotations, which can be expensive to obtain. Towards addressing this issue, we present a novel image-to-image translation method that specifically targets cross-domain object detection. We formulate our approach as a contrastive learning framework with an inductive prior that optimises the appearance of object instances through spatial attention masks, implicitly delineating the scene into foreground regions associated with the target object instances and background non-object regions. Instead of relying on object annotations to explicitly account for object instances during translation, our approach learns to represent objects by contrasting local-global information. This affords investigation of an under-explored challenge: obtaining performant detection, under domain shifts, without relying on object annotations nor detector model fine-tuning. We experiment with multiple cross-domain object detection settings across three challenging benchmarks and report state-of-the-art performance. Project page: https://local-global-detection.github.io<|reference_end|> | arxiv | @article{triantafyllidou2024improving,
title={Improving Object Detection via Local-global Contrastive Learning},
author={Danai Triantafyllidou, Sarah Parisot, Ales Leonardis, Steven McDonagh},
journal={arXiv preprint arXiv:2410.05058},
year={2024},
archivePrefix={arXiv},
eprint={2410.05058},
primaryClass={cs.CV}
} | triantafyllidou2024improving |
arxiv-666534 | 2410.05062 | Large Language Model Based Multi-Objective Optimization for Integrated Sensing and Communications in UAV Networks | <|reference_start|>Large Language Model Based Multi-Objective Optimization for Integrated Sensing and Communications in UAV Networks: This letter investigates an unmanned aerial vehicle (UAV) network with integrated sensing and communication (ISAC) systems, where multiple UAVs simultaneously sense the locations of ground users and provide communication services with radars. To find the trade-off between communication and sensing (C\&S) in the system, we formulate a multi-objective optimization problem (MOP) to maximize the total network utility and the localization Cram\'er-Rao bounds (CRB) of ground users, which jointly optimizes the deployment and power control of UAVs. Inspired by the huge potential of large language models (LLM) for prediction and inference, we propose an LLM-enabled decomposition-based multi-objective evolutionary algorithm (LEDMA) for solving the highly non-convex MOP. We first adopt a decomposition-based scheme to decompose the MOP into a series of optimization sub-problems. We second integrate LLMs as black-box search operators with MOP-specifically designed prompt engineering into the framework of MOEA to solve optimization sub-problems simultaneously. Numerical results demonstrate that the proposed LEDMA can find the clear trade-off between C\&S and outperforms baseline MOEAs in terms of obtained Pareto fronts and convergence.<|reference_end|> | arxiv | @article{li2024large,
title={Large Language Model Based Multi-Objective Optimization for Integrated
Sensing and Communications in UAV Networks},
author={Haoyun Li, Ming Xiao, Kezhi Wang, Dong In Kim, and Merouane Debbah},
journal={arXiv preprint arXiv:2410.05062},
year={2024},
archivePrefix={arXiv},
eprint={2410.05062},
primaryClass={cs.IT eess.SP math.IT}
} | li2024large |
arxiv-666535 | 2410.05063 | Control-oriented Clustering of Visual Latent Representation | <|reference_start|>Control-oriented Clustering of Visual Latent Representation: We initiate a study of the geometry of the visual representation space -- the information channel from the vision encoder to the action decoder -- in an image-based control pipeline learned from behavior cloning. Inspired by the phenomenon of neural collapse (NC) in image classification, we investigate whether a similar law of clustering emerges in the visual representation space. Since image-based control is a regression task without explicitly defined classes, the central piece of the puzzle lies in determining according to what implicit classes the visual features cluster, if such a law exists. Focusing on image-based planar pushing, we posit that the most important role of the visual representation in a control task is to convey a goal to the action decoder. We then classify training samples of expert demonstrations into eight "control-oriented" classes based on (a) the relative pose between the object and the target in the input or (b) the relative pose of the object induced by expert actions in the output, where one class corresponds to one relative pose orthant (REPO). Across four different instantiations of architecture, we report the prevalent emergence of control-oriented clustering in the visual representation space according to the eight REPOs. Beyond empirical observation, we show such a law of clustering can be leveraged as an algorithmic tool to improve test-time performance when training a policy with limited expert demonstrations. Particularly, we pretrain the vision encoder using NC as a regularization to encourage control-oriented clustering of the visual features. Surprisingly, such an NC-pretrained vision encoder, when finetuned end-to-end with the action decoder, boosts the test-time performance by 10% to 35% in the low-data regime. Real-world vision-based planar pushing experiments confirmed the surprising advantage of control-oriented visual representation pretraining.<|reference_end|> | arxiv | @article{qi2024control-oriented,
title={Control-oriented Clustering of Visual Latent Representation},
author={Han Qi, Haocheng Yin, Heng Yang},
journal={arXiv preprint arXiv:2410.05063},
year={2024},
archivePrefix={arXiv},
eprint={2410.05063},
primaryClass={cs.LG cs.CV cs.RO}
} | qi2024control-oriented |
arxiv-666536 | 2410.05071 | Function Gradient Approximation with Random Shallow ReLU Networks with Control Applications | <|reference_start|>Function Gradient Approximation with Random Shallow ReLU Networks with Control Applications: Neural networks are widely used to approximate unknown functions in control. A common neural network architecture uses a single hidden layer (i.e. a shallow network), in which the input parameters are fixed in advance and only the output parameters are trained. The typical formal analysis asserts that if output parameters exist to approximate the unknown function with sufficient accuracy, then desired control performance can be achieved. A long-standing theoretical gap was that no conditions existed to guarantee that, for the fixed input parameters, required accuracy could be obtained by training the output parameters. Our recent work has partially closed this gap by demonstrating that if input parameters are chosen randomly, then for any sufficiently smooth function, with high-probability there are output parameters resulting in $O((1/m)^{1/2})$ approximation errors, where $m$ is the number of neurons. However, some applications, notably continuous-time value function approximation, require that the network approximates the both the unknown function and its gradient with sufficient accuracy. In this paper, we show that randomly generated input parameters and trained output parameters result in gradient errors of $O((\log(m)/m)^{1/2})$, and additionally, improve the constants from our prior work. We show how to apply the result to policy evaluation problems.<|reference_end|> | arxiv | @article{lamperski2024function,
title={Function Gradient Approximation with Random Shallow ReLU Networks with
Control Applications},
author={Andrew Lamperski and Siddharth Salapaka},
journal={arXiv preprint arXiv:2410.05071},
year={2024},
archivePrefix={arXiv},
eprint={2410.05071},
primaryClass={cs.LG cs.SY eess.SY math.OC math.ST stat.TH}
} | lamperski2024function |
arxiv-666537 | 2410.05074 | xLSTM-FER: Enhancing Student Expression Recognition with Extended Vision Long Short-Term Memory Network | <|reference_start|>xLSTM-FER: Enhancing Student Expression Recognition with Extended Vision Long Short-Term Memory Network: Student expression recognition has become an essential tool for assessing learning experiences and emotional states. This paper introduces xLSTM-FER, a novel architecture derived from the Extended Long Short-Term Memory (xLSTM), designed to enhance the accuracy and efficiency of expression recognition through advanced sequence processing capabilities for student facial expression recognition. xLSTM-FER processes input images by segmenting them into a series of patches and leveraging a stack of xLSTM blocks to handle these patches. xLSTM-FER can capture subtle changes in real-world students' facial expressions and improve recognition accuracy by learning spatial-temporal relationships within the sequence. Experiments on CK+, RAF-DF, and FERplus demonstrate the potential of xLSTM-FER in expression recognition tasks, showing better performance compared to state-of-the-art methods on standard datasets. The linear computational and memory complexity of xLSTM-FER makes it particularly suitable for handling high-resolution images. Moreover, the design of xLSTM-FER allows for efficient processing of non-sequential inputs such as images without additional computation.<|reference_end|>
title={xLSTM-FER: Enhancing Student Expression Recognition with Extended Vision
Long Short-Term Memory Network},
  author={Qionghao Huang and Jili Chen},
journal={arXiv preprint arXiv:2410.05074},
year={2024},
archivePrefix={arXiv},
eprint={2410.05074},
primaryClass={cs.CV}
} | huang2024xlstm-fer: |
arxiv-666538 | 2410.05076 | TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention | <|reference_start|>TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention: Large language models (LLMs) have driven significant advancements across diverse NLP tasks, with long-context models gaining prominence for handling extended inputs. However, the expanding key-value (KV) cache size required by Transformer architectures intensifies the memory constraints, particularly during the decoding phase, creating a significant bottleneck. Existing sparse attention mechanisms designed to address this bottleneck have two limitations: (1) they often fail to reliably identify the most relevant tokens for attention, and (2) they overlook the spatial coherence of token selection across consecutive Transformer layers, which can lead to performance degradation and substantial overhead in token selection. This paper introduces TidalDecode, a simple yet effective algorithm and system for fast and accurate LLM decoding through position persistent sparse attention. TidalDecode leverages the spatial coherence of tokens selected by existing sparse attention methods and introduces a few token selection layers that perform full attention to identify the tokens with the highest attention scores, while all other layers perform sparse attention with the pre-selected tokens. This design enables TidalDecode to substantially reduce the overhead of token selection for sparse attention without sacrificing the quality of the generated results. Evaluation on a diverse set of LLMs and tasks shows that TidalDecode closely matches the generative performance of full attention methods while reducing the LLM decoding latency by up to 2.1x.<|reference_end|> | arxiv | @article{yang2024tidaldecode:,
title={TidalDecode: Fast and Accurate LLM Decoding with Position Persistent
Sparse Attention},
  author={Lijie Yang and Zhihao Zhang and Zhuofu Chen and Zikun Li and Zhihao Jia},
journal={arXiv preprint arXiv:2410.05076},
year={2024},
archivePrefix={arXiv},
eprint={2410.05076},
primaryClass={cs.LG cs.AI cs.CL}
} | yang2024tidaldecode: |
arxiv-666539 | 2410.05077 | ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering | <|reference_start|>ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering: Current Large Language Models (LLMs) have shown strong reasoning capabilities in commonsense question answering benchmarks, but the process underlying their success remains largely opaque. As a consequence, recent approaches have equipped LLMs with mechanisms for knowledge retrieval, reasoning and introspection, not only to improve their capabilities but also to enhance the interpretability of their outputs. However, these methods require additional training, hand-crafted templates or human-written explanations. To address these issues, we introduce ZEBRA, a zero-shot question answering framework that combines retrieval, case-based reasoning and introspection and dispenses with the need for additional training of the LLM. Given an input question, ZEBRA retrieves relevant question-knowledge pairs from a knowledge base and generates new knowledge by reasoning over the relationships in these pairs. This generated knowledge is then used to answer the input question, improving the model's performance and interpretability. We evaluate our approach across 8 well-established commonsense reasoning benchmarks, demonstrating that ZEBRA consistently outperforms strong LLMs and previous knowledge integration approaches, achieving an average accuracy improvement of up to 4.5 points.<|reference_end|> | arxiv | @article{molfese2024zebra:,
title={ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense
Question Answering},
  author={Francesco Maria Molfese and Simone Conia and Riccardo Orlando and
Roberto Navigli},
journal={arXiv preprint arXiv:2410.05077},
year={2024},
archivePrefix={arXiv},
eprint={2410.05077},
primaryClass={cs.CL}
} | molfese2024zebra: |
arxiv-666540 | 2410.05078 | Compression via Pre-trained Transformers: A Study on Byte-Level Multimodal Data | <|reference_start|>Compression via Pre-trained Transformers: A Study on Byte-Level Multimodal Data: Foundation models have recently been shown to be strong data compressors. However, when accounting for their excessive parameter count, their compression ratios are actually inferior to standard compression algorithms. Moreover, naively reducing the number of parameters may not necessarily help as it leads to worse predictions and thus weaker compression. In this paper, we conduct a large-scale empirical study to investigate whether there is a sweet spot where competitive compression ratios with pre-trained vanilla transformers are possible. To this end, we train families of models on 165GB of raw byte sequences of either text, image, or audio data (and all possible combinations of the three) and then compress 1GB of out-of-distribution (OOD) data from each modality. We find that relatively small models (i.e., millions of parameters) can outperform standard general-purpose compression algorithms (gzip, LZMA2) and even domain-specific compressors (PNG, JPEG 2000, FLAC) - even when factoring in parameter count. We achieve, e.g., the lowest compression ratio of 0.49 on OOD audio data (vs. 0.54 for FLAC). To study the impact of model- and dataset scale, we conduct extensive ablations and hyperparameter sweeps, and we investigate the effect of unimodal versus multimodal training. We find that even small models can be trained to perform well on multiple modalities, but, in contrast to previously reported results with large-scale foundation models, transfer to unseen modalities is generally weak.<|reference_end|> | arxiv | @article{heurtel-depeiges2024compression,
title={Compression via Pre-trained Transformers: A Study on Byte-Level
Multimodal Data},
  author={David Heurtel-Depeiges and Anian Ruoss and Joel Veness and Tim Genewein},
journal={arXiv preprint arXiv:2410.05078},
year={2024},
archivePrefix={arXiv},
eprint={2410.05078},
primaryClass={cs.LG cs.AI cs.IT math.IT}
} | heurtel-depeiges2024compression |
arxiv-666541 | 2410.05079 | HE-Nav: A High-Performance and Efficient Navigation System for Aerial-Ground Robots in Cluttered Environments | <|reference_start|>HE-Nav: A High-Performance and Efficient Navigation System for Aerial-Ground Robots in Cluttered Environments: Existing AGR navigation systems have advanced in lightly occluded scenarios (e.g., buildings) by employing 3D semantic scene completion networks for voxel occupancy prediction and constructing Euclidean Signed Distance Field (ESDF) maps for collision-free path planning. However, these systems exhibit suboptimal performance and efficiency in cluttered environments with severe occlusions (e.g., dense forests or tall walls), due to limitations arising from perception networks' low prediction accuracy and path planners' high computational overhead. In this paper, we present HE-Nav, the first high-performance and efficient navigation system tailored for AGRs operating in cluttered environments. The perception module utilizes a lightweight semantic scene completion network (LBSCNet), guided by a bird's eye view (BEV) feature fusion and enhanced by an exquisitely designed SCB-Fusion module and attention mechanism. This enables real-time and efficient obstacle prediction in cluttered areas, generating a complete local map. Building upon this completed map, our novel AG-Planner employs the energy-efficient kinodynamic A* search algorithm to guarantee planning is energy-saving. Subsequent trajectory optimization processes yield safe, smooth, dynamically feasible and ESDF-free aerial-ground hybrid paths. Extensive experiments demonstrate that HE-Nav achieved 7x energy savings in real-world situations while maintaining planning success rates of 98% in simulation scenarios. Code and video are available on our project page: https://jmwang0117.github.io/HE-Nav/.<|reference_end|> | arxiv | @article{wang2024he-nav:,
title={HE-Nav: A High-Performance and Efficient Navigation System for
Aerial-Ground Robots in Cluttered Environments},
  author={Junming Wang and Zekai Sun and Xiuxian Guan and Tianxiang Shen and
Dong Huang and Zongyuan Zhang and Tianyang Duan and Fangming Liu and Heming Cui},
journal={arXiv preprint arXiv:2410.05079},
year={2024},
archivePrefix={arXiv},
eprint={2410.05079},
primaryClass={cs.RO}
} | wang2024he-nav: |
arxiv-666542 | 2410.05080 | ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery | <|reference_start|>ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery: The advancements of large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about the true capabilities of such agents. In this work, we argue that for an agent to fully automate scientific discovery, it must be able to complete all essential tasks in the workflow. Thus, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation. To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery. To ensure the scientific authenticity and real-world relevance of our benchmark, we extract 102 tasks from 44 peer-reviewed publications in four disciplines and engage nine subject matter experts to validate them. We unify the target output for every task to a self-contained Python program file and employ an array of evaluation metrics to examine the generated programs, execution results, and costs. Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its annotation quality and scientific plausibility. We also propose two effective strategies to mitigate data contamination concerns. Using our benchmark, we evaluate five open-weight and proprietary LLMs, each with three frameworks: direct prompting, OpenHands, and self-debug. Given three attempts for each task, the best-performing agent can only solve 32.4% of the tasks independently and 34.3% with expert-provided knowledge. 
These results underscore the limited capacities of current language agents in generating code for data-driven discovery, let alone end-to-end automation for scientific research.<|reference_end|> | arxiv | @article{chen2024scienceagentbench:,
title={ScienceAgentBench: Toward Rigorous Assessment of Language Agents for
Data-Driven Scientific Discovery},
  author={Ziru Chen and Shijie Chen and Yuting Ning and Qianheng Zhang and
Boshi Wang and Botao Yu and Yifei Li and Zeyi Liao and Chen Wei and Zitong Lu
and Vishal Dey and Mingyi Xue and Frazier N. Baker and Benjamin Burns and
Daniel Adu-Ampratwum and Xuhui Huang and Xia Ning and Song Gao and Yu Su and
Huan Sun},
journal={arXiv preprint arXiv:2410.05080},
year={2024},
archivePrefix={arXiv},
eprint={2410.05080},
primaryClass={cs.CL cs.AI cs.LG}
} | chen2024scienceagentbench: |
arxiv-666543 | 2410.05085 | Explanation sensitivity to the randomness of large language models: the case of journalistic text classification | <|reference_start|>Explanation sensitivity to the randomness of large language models: the case of journalistic text classification: Large language models (LLMs) perform very well in several natural language processing tasks but raise explainability challenges. In this paper, we examine the effect of random elements in the training of LLMs on the explainability of their predictions. We do so on a task of opinionated journalistic text classification in French. Using a fine-tuned CamemBERT model and an explanation method based on relevance propagation, we find that training with different random seeds produces models with similar accuracy but variable explanations. We therefore claim that characterizing the explanations' statistical distribution is needed for the explainability of LLMs. We then explore a simpler model based on textual features which offers stable explanations but is less accurate. Hence, this simpler model corresponds to a different tradeoff between accuracy and explainability. We show that it can be improved by inserting features derived from CamemBERT's explanations. We finally discuss new research directions suggested by our results, in particular regarding the origin of the sensitivity observed in the training randomness.<|reference_end|> | arxiv | @article{bogaert2024explanation,
title={Explanation sensitivity to the randomness of large language models: the
case of journalistic text classification},
  author={Jeremie Bogaert and Marie-Catherine de Marneffe and Antonin Descampe
and Louis Escouflaire and Cedrick Fairon and Francois-Xavier Standaert},
journal={Traitement Automatique des Langues 64, 2023, ATALA, Paris},
year={2024},
archivePrefix={arXiv},
eprint={2410.05085},
primaryClass={cs.CL}
} | bogaert2024explanation |
arxiv-666544 | 2410.05087 | On the Formation of Steady Coalitions | <|reference_start|>On the Formation of Steady Coalitions: This paper studies the formation of the grand coalition of a cooperative game by investigating its possible internal dynamics. Each coalition is capable of forcing all players to reconsider the current state of the game when it does not provide sufficient payoff. Different coalitions may ask for contradictory evolutions, leading to the impossibility of the grand coalition forming. In this paper, we give a characterization of the impossibility, for a given state, of finding a new state dominating the previous one such that each aggrieved coalition has a satisfactory payoff. To do so, we develop new polyhedral tools related to a new family of polyhedra, appearing in numerous situations in cooperative game theory.<|reference_end|> | arxiv | @article{mermoud2024on,
title={On the Formation of Steady Coalitions},
author={Dylan Laplace Mermoud},
journal={arXiv preprint arXiv:2410.05087},
year={2024},
archivePrefix={arXiv},
eprint={2410.05087},
primaryClass={econ.TH cs.DM cs.GT}
} | mermoud2024on |
arxiv-666545 | 2410.05090 | HyperINF: Unleashing the HyperPower of the Schulz's Method for Data Influence Estimation | <|reference_start|>HyperINF: Unleashing the HyperPower of the Schulz's Method for Data Influence Estimation: Influence functions provide a principled method to assess the contribution of individual training samples to a specific target. Yet, their high computational costs limit their applications on large-scale models and datasets. Existing methods proposed for influence function approximation have significantly reduced the computational overheads. However, they mostly suffer from inaccurate estimation due to the lack of strong convergence guarantees from the algorithm. The family of hyperpower methods are well-known for their rigorous convergence guarantees on matrix inverse approximation, while the matrix multiplication operation can involve intractable memory and computation costs on large-scale models. We propose HyperINF, an efficient and accurate influence function approximation method which leverages the hyperpower method, specifically Schulz's iterative algorithm. To deal with the computation-intensive matrix multiplication, we incorporate the generalized Fisher information (GFIM) as a low-rank approximation of the Hessian matrix, which reduces the memory and computation overheads to constant costs independent of ranks on LoRA-tuned models. We first demonstrate the superior accuracy and stability of HyperINF compared to other baselines through a synthetic convergence simulation for matrix inversion. We further validate the efficacy of HyperINF through extensive real-world data attribution tasks, including mislabeled data detection and data selection for LLM and VLM fine-tuning. On LoRA-tuned models, HyperINF achieves superior downstream performance with minimal memory and computational overhead, while other baselines suffer from significant degradation. 
Our codebase is available at https://github.com/Blackzxy/HyperINF.<|reference_end|> | arxiv | @article{zhou2024hyperinf:,
title={HyperINF: Unleashing the HyperPower of the Schulz's Method for Data
Influence Estimation},
  author={Xinyu Zhou and Simin Fan and Martin Jaggi},
journal={arXiv preprint arXiv:2410.05090},
year={2024},
archivePrefix={arXiv},
eprint={2410.05090},
primaryClass={cs.LG stat.ML}
} | zhou2024hyperinf: |
arxiv-666546 | 2410.05091 | DIMS: Distributed Index for Similarity Search in Metric Spaces | <|reference_start|>DIMS: Distributed Index for Similarity Search in Metric Spaces: Similarity search finds objects that are similar to a given query object based on a similarity metric. As the amount and variety of data continue to grow, similarity search in metric spaces has gained significant attention. Metric spaces can accommodate any type of data and support flexible distance metrics, making similarity search in metric spaces beneficial for many real-world applications, such as multimedia retrieval, personalized recommendation, trajectory analytics, data mining, decision planning, and distributed servers. However, existing studies mostly focus on indexing metric spaces on a single machine, which faces efficiency and scalability limitations with increasing data volume and query amount. Recent advancements in similarity search turn towards distributed methods, while they face challenges including inefficient local data management, unbalanced workload, and low concurrent search efficiency. To this end, we propose DIMS, an efficient Distributed Index for similarity search in Metric Spaces. First, we design a novel three-stage heterogeneous partition to achieve workload balance. Then, we present an effective three-stage indexing structure to efficiently manage objects. We also develop concurrent search methods with filtering and validation techniques that support efficient distributed similarity search. Additionally, we devise a cost-based optimization model to balance communication and computation cost. Extensive experiments demonstrate that DIMS significantly outperforms existing distributed similarity search approaches.<|reference_end|> | arxiv | @article{zhu2024dims:,
title={DIMS: Distributed Index for Similarity Search in Metric Spaces},
  author={Yifan Zhu and Chengyang Luo and Tang Qian and Lu Chen and Yunjun Gao and Baihua Zheng},
journal={arXiv preprint arXiv:2410.05091},
year={2024},
archivePrefix={arXiv},
eprint={2410.05091},
primaryClass={cs.DB cs.DC}
} | zhu2024dims: |
arxiv-666547 | 2410.05093 | Reinforcement Learning Control for Autonomous Hydraulic Material Handling Machines with Underactuated Tools | <|reference_start|>Reinforcement Learning Control for Autonomous Hydraulic Material Handling Machines with Underactuated Tools: The precise and safe control of heavy material handling machines presents numerous challenges due to the hard-to-model hydraulically actuated joints and the need for collision-free trajectory planning with a free-swinging end-effector tool. In this work, we propose an RL-based controller that commands the cabin joint and the arm simultaneously. It is trained in a simulation combining data-driven modeling techniques with first-principles modeling. On the one hand, we employ a neural network model to capture the highly nonlinear dynamics of the upper carriage turn hydraulic motor, incorporating explicit pressure prediction to handle delays better. On the other hand, we model the arm as velocity-controllable and the free-swinging end-effector tool as a damped pendulum using first principles. This combined model enhances our simulation environment, enabling the training of RL controllers that can be directly transferred to the real machine. Designed to reach steady-state Cartesian targets, the RL controller learns to leverage the hydraulic dynamics to improve accuracy, maintain high speeds, and minimize end-effector tool oscillations. Our controller, tested on a mid-size prototype material handler, is more accurate than an inexperienced operator and causes fewer tool oscillations. It demonstrates competitive performance even compared to an experienced professional driver.<|reference_end|> | arxiv | @article{spinelli2024reinforcement,
title={Reinforcement Learning Control for Autonomous Hydraulic Material
Handling Machines with Underactuated Tools},
  author={Filippo A. Spinelli and Pascal Egli and Julian Nubert and Fang Nan
and Thilo Bleumer and Patrick Goegler and Stephan Brockes and Ferdinand
Hofmann and Marco Hutter},
journal={arXiv preprint arXiv:2410.05093},
year={2024},
archivePrefix={arXiv},
eprint={2410.05093},
primaryClass={cs.RO cs.SY eess.SY}
} | spinelli2024reinforcement |
arxiv-666548 | 2410.05094 | On the Structure of Game Provenance and its Applications | <|reference_start|>On the Structure of Game Provenance and its Applications: Provenance in databases has been thoroughly studied for positive and for recursive queries, then for first-order (FO) queries, i.e., having negation but no recursion. Query evaluation can be understood as a two-player game where the opponents argue whether or not a tuple is in the query answer. This game-theoretic approach yields a natural provenance model for FO queries, unifying how and why-not provenance. Here, we study the fine-grain structure of game provenance. A game $G=(V,E)$ consists of positions $V$ and moves $E$ and can be solved by computing the well-founded model of a single, unstratifiable rule: \[ \text{win}(X) \leftarrow \text{move}(X, Y), \neg \, \text{win}(Y). \] In the solved game $G^{\lambda}$, the value of a position $x\,{\in}\,V$ is either won, lost, or drawn. This value is explained by the provenance $\mathscr{P}$(x), i.e., certain (annotated) edges reachable from $x$. We identify seven edge types that give rise to new kinds of provenance, i.e., potential, actual, and primary, and demonstrate that "not all moves are created equal". We describe the new provenance types, show how they can be computed while solving games, and discuss applications, e.g., for abstract argumentation frameworks.<|reference_end|> | arxiv | @article{bowers2024on,
title={On the Structure of Game Provenance and its Applications},
  author={Shawn Bowers and Yilin Xia and Bertram Lud\"ascher},
  journal={2024 IEEE European Symposium on Security and Privacy Workshops
(EuroS\&PW), Vienna, Austria, 2024, pp. 602-609},
year={2024},
doi={10.1109/EuroSPW61312.2024.00073},
archivePrefix={arXiv},
eprint={2410.05094},
primaryClass={cs.AI}
} | bowers2024on |
arxiv-666549 | 2410.05095 | Towards a Modern and Lightweight Rendering Engine for Dynamic Robotic Simulations | <|reference_start|>Towards a Modern and Lightweight Rendering Engine for Dynamic Robotic Simulations: Interactive dynamic simulators are an accelerator for developing novel robotic control algorithms and complex systems involving humans and robots. In user training and synthetic data generation applications, a high-fidelity visualization of the simulation is essential. Visual fidelity is dependent on the quality of the computer graphics algorithms used to render the simulated scene. Furthermore, the rendering algorithms must be implemented on the graphics processing unit (GPU) to achieve real-time performance, requiring the use of a graphics application programming interface (API). This paper presents a performance-focused and lightweight rendering engine supporting the Vulkan graphics API. The engine is designed to modernize the legacy rendering pipeline of Asynchronous Multi-Body Framework (AMBF), a dynamic simulation framework used extensively for interactive robotics simulation development. This new rendering engine implements graphical features such as physically based rendering (PBR), anti-aliasing, and ray-traced shadows, significantly improving the image quality of AMBF. Computational experiments show that the engine can render a simulated scene with over seven million triangles while maintaining GPU computation times within two milliseconds.<|reference_end|> | arxiv | @article{allison2024towards,
title={Towards a Modern and Lightweight Rendering Engine for Dynamic Robotic
Simulations},
  author={Christopher John Allison and Haoying Zhou and Adnan Munawar and
Peter Kazanzides and Juan Antonio Barragan},
journal={arXiv preprint arXiv:2410.05095},
year={2024},
archivePrefix={arXiv},
eprint={2410.05095},
primaryClass={cs.RO cs.GR cs.SE}
} | allison2024towards |
arxiv-666550 | 2410.05096 | Human-in-the-loop Reasoning For Traffic Sign Detection: Collaborative Approach Yolo With Video-llava | <|reference_start|>Human-in-the-loop Reasoning For Traffic Sign Detection: Collaborative Approach Yolo With Video-llava: Traffic Sign Recognition (TSR) detection is a crucial component of autonomous vehicles. While You Only Look Once (YOLO) is a popular real-time object detection algorithm, factors like training data quality and adverse weather conditions (e.g., heavy rain) can lead to detection failures. These failures can be particularly dangerous when visual similarities between objects exist, such as mistaking a 30 km/h sign for a higher speed limit sign. This paper proposes a method that combines video analysis and reasoning, prompting a large vision model with human-in-the-loop guidance, to improve YOLO's accuracy in detecting road speed limit signs, especially in semi-real-world conditions. It is hypothesized that the guided prompting and reasoning abilities of Video-LLava can enhance YOLO's traffic sign detection capabilities. This hypothesis is supported by an evaluation based on human-annotated accuracy metrics within a dataset of recorded videos from the CARLA car simulator. The results demonstrate that a collaborative approach combining YOLO with Video-LLava and reasoning can effectively address challenging situations such as heavy rain and overcast conditions that hinder YOLO's detection capabilities.<|reference_end|>
title={Human-in-the-loop Reasoning For Traffic Sign Detection: Collaborative
Approach Yolo With Video-llava},
  author={Mehdi Azarafza and Fatima Idrees and Ali Ehteshami Bejnordi and
Charles Steinmetz and Stefan Henkler and Achim Rettberg},
journal={arXiv preprint arXiv:2410.05096},
year={2024},
archivePrefix={arXiv},
eprint={2410.05096},
primaryClass={cs.CV}
} | azarafza2024human-in-the-loop |
arxiv-666551 | 2410.05097 | DreamSat: Towards a General 3D Model for Novel View Synthesis of Space Objects | <|reference_start|>DreamSat: Towards a General 3D Model for Novel View Synthesis of Space Objects: Novel view synthesis (NVS) enables the generation of new images of a scene or the conversion of a set of 2D images into a comprehensive 3D model. In the context of Space Domain Awareness, since space is becoming increasingly congested, NVS can accurately map space objects and debris, improving the safety and efficiency of space operations. Similarly, in Rendezvous and Proximity Operations missions, 3D models can provide details about a target object's shape, size, and orientation, allowing for better planning and prediction of the target's behavior. In this work, we explore the generalization abilities of these reconstruction techniques, aiming to avoid the necessity of retraining for each new scene, by presenting a novel approach to 3D spacecraft reconstruction from single-view images, DreamSat, by fine-tuning the Zero123 XL, a state-of-the-art single-view reconstruction model, on a high-quality dataset of 190 spacecraft models and integrating it into the DreamGaussian framework. We demonstrate consistent improvements in reconstruction quality across multiple metrics, including Contrastive Language-Image Pretraining (CLIP) score (+0.33%), Peak Signal-to-Noise Ratio (PSNR) (+2.53%), Structural Similarity Index (SSIM) (+2.38%), and Learned Perceptual Image Patch Similarity (LPIPS) (+0.16%) on a test set of 30 previously unseen spacecraft images. Our method addresses the lack of domain-specific 3D reconstruction tools in the space industry by leveraging state-of-the-art diffusion models and 3D Gaussian splatting techniques. This approach maintains the efficiency of the DreamGaussian framework while enhancing the accuracy and detail of spacecraft reconstructions. 
The code for this work can be accessed on GitHub (https://github.com/ARCLab-MIT/space-nvs).<|reference_end|> | arxiv | @article{mathihalli2024dreamsat:,
title={DreamSat: Towards a General 3D Model for Novel View Synthesis of Space
Objects},
  author={Nidhi Mathihalli and Audrey Wei and Giovanni Lavezzi and Peng Mun
Siew and Victor Rodriguez-Fernandez and Hodei Urrutxua and Richard Linares},
journal={arXiv preprint arXiv:2410.05097},
year={2024},
archivePrefix={arXiv},
eprint={2410.05097},
primaryClass={cs.CV cs.LG}
} | mathihalli2024dreamsat: |
arxiv-666552 | 2410.05098 | Constructing probing functions for direct sampling methods for inverse scattering problems with limited-aperture data: finite space framework and deep probing network | <|reference_start|>Constructing probing functions for direct sampling methods for inverse scattering problems with limited-aperture data: finite space framework and deep probing network: This work studies an inverse scattering problem when limited-aperture data are available that are from just one or a few incident fields. This inverse problem is highly ill-posed due to the limited receivers and a few incident fields employed. Solving inverse scattering problems with limited-aperture data is important in applications as collecting full data is often either unrealistic or too expensive. The direct sampling methods (DSMs) with full-aperture data can effectively and stably estimate the locations and geometric shapes of the unknown scatterers with a very limited number of incident waves. However, a direct application of DSMs to the case of limited receivers would face the resolution limit. To break this limitation, we propose a finite space framework with two specific schemes, and an unsupervised deep learning strategy to construct effective probing functions for the DSMs in the case with limited-aperture data. Several representative numerical experiments are carried out to illustrate and compare the performance of different proposed schemes.<|reference_end|> | arxiv | @article{ning2024constructing,
title={Constructing probing functions for direct sampling methods for inverse
scattering problems with limited-aperture data: finite space framework and
deep probing network},
author={Jianfeng Ning and Jun Zou},
journal={arXiv preprint arXiv:2410.05098},
year={2024},
archivePrefix={arXiv},
eprint={2410.05098},
primaryClass={math.NA cs.NA}
} | ning2024constructing |
arxiv-666553 | 2410.05099 | Investigating large language models for their competence in extracting grammatically sound sentences from transcribed noisy utterances | <|reference_start|>Investigating large language models for their competence in extracting grammatically sound sentences from transcribed noisy utterances: Selectively processing noisy utterances while effectively disregarding speech-specific elements poses no considerable challenge for humans, as they exhibit remarkable cognitive abilities to separate semantically significant content from speech-specific noise (i.e. filled pauses, disfluencies, and restarts). These abilities may be driven by mechanisms based on acquired grammatical rules that compose abstract syntactic-semantic structures within utterances. Segments without syntactic and semantic significance are consistently disregarded in these structures. The structures, in tandem with lexis, likely underpin language comprehension and thus facilitate effective communication. In our study, grounded in linguistically motivated experiments, we investigate whether large language models (LLMs) can effectively perform analogical speech comprehension tasks. In particular, we examine the ability of LLMs to extract well-structured utterances from transcriptions of noisy dialogues. We conduct two evaluation experiments in the Polish language scenario, using a dataset presumably unfamiliar to LLMs to mitigate the risk of data contamination. Our results show that not all extracted utterances are correctly structured, indicating that either LLMs do not fully acquire syntactic-semantic rules or they acquire them but cannot apply them effectively. We conclude that the ability of LLMs to comprehend noisy utterances is still relatively superficial compared to human proficiency in processing them.<|reference_end|> | arxiv | @article{wróblewska2024investigating,
title={Investigating large language models for their competence in extracting
grammatically sound sentences from transcribed noisy utterances},
author={Alina Wr\'oblewska},
journal={arXiv preprint arXiv:2410.05099},
year={2024},
archivePrefix={arXiv},
eprint={2410.05099},
primaryClass={cs.CL}
} | wróblewska2024investigating |
arxiv-666554 | 2410.05100 | IGroupSS-Mamba: Interval Group Spatial-Spectral Mamba for Hyperspectral Image Classification | <|reference_start|>IGroupSS-Mamba: Interval Group Spatial-Spectral Mamba for Hyperspectral Image Classification: Hyperspectral image (HSI) classification has garnered substantial attention in remote sensing fields. Recent Mamba architectures built upon the Selective State Space Models (S6) have demonstrated enormous potential in long-range sequence modeling. However, the high dimensionality of hyperspectral data and information redundancy pose challenges to the application of Mamba in HSI classification, suffering from suboptimal performance and computational efficiency. In light of this, this paper investigates a lightweight Interval Group Spatial-Spectral Mamba framework (IGroupSS-Mamba) for HSI classification, which allows for multi-directional and multi-scale global spatial-spectral information extraction in a grouping and hierarchical manner. Technically, an Interval Group S6 Mechanism (IGSM) is developed as the core component, which partitions high-dimensional features into multiple non-overlapping groups at intervals, and then integrates a unidirectional S6 for each group with a specific scanning direction to achieve non-redundant sequence modeling. Compared to conventional applying multi-directional scanning to all bands, this grouping strategy leverages the complementary strengths of different scanning directions while decreasing computational costs. To adequately capture the spatial-spectral contextual information, an Interval Group Spatial-Spectral Block (IGSSB) is introduced, in which two IGSM-based spatial and spectral operators are cascaded to characterize the global spatial-spectral relationship along the spatial and spectral dimensions, respectively. 
IGroupSS-Mamba is constructed as a hierarchical structure stacked by multiple IGSSB blocks, integrating a pixel aggregation-based downsampling strategy for multiscale spatial-spectral semantic learning from shallow to deep stages. Extensive experiments demonstrate that IGroupSS-Mamba outperforms the state-of-the-art methods.<|reference_end|> | arxiv | @article{he2024igroupss-mamba:,
title={IGroupSS-Mamba: Interval Group Spatial-Spectral Mamba for Hyperspectral
Image Classification},
author={Yan He and Bing Tu and Puzhao Jiang and Bo Liu and Jun Li and
Antonio Plaza},
journal={arXiv preprint arXiv:2410.05100},
year={2024},
archivePrefix={arXiv},
eprint={2410.05100},
primaryClass={cs.CV eess.IV}
} | he2024igroupss-mamba: |
arxiv-666555 | 2410.05101 | CR-CTC: Consistency regularization on CTC for improved speech recognition | <|reference_start|>CR-CTC: Consistency regularization on CTC for improved speech recognition: Connectionist Temporal Classification (CTC) is a widely used method for automatic speech recognition (ASR), renowned for its simplicity and computational efficiency. However, it often falls short in recognition performance compared to transducer or systems combining CTC and attention-based encoder-decoder (CTC/AED). In this work, we propose the Consistency-Regularized CTC (CR-CTC), which enforces consistency between two CTC distributions obtained from different augmented views of the input speech mel-spectrogram. We provide in-depth insights into its essential behaviors from three perspectives: 1) it conducts self-distillation between random pairs of sub-models that process different augmented views; 2) it learns contextual representation through masked prediction for positions within time-masked regions, especially when we increase the amount of time masking; 3) it suppresses the extremely peaky CTC distributions, thereby reducing overfitting and improving the generalization ability. Extensive experiments on LibriSpeech, Aishell-1, and GigaSpeech datasets demonstrate the effectiveness of our CR-CTC, which achieves performance comparable to, or even slightly better than, that of transducer and CTC/AED.<|reference_end|> | arxiv | @article{yao2024cr-ctc:,
title={CR-CTC: Consistency regularization on CTC for improved speech
recognition},
author={Zengwei Yao and Wei Kang and Xiaoyu Yang and Fangjun Kuang and
Liyong Guo and Han Zhu and Zengrui Jin and Zhaoqing Li and Long Lin and
Daniel Povey},
journal={arXiv preprint arXiv:2410.05101},
year={2024},
archivePrefix={arXiv},
eprint={2410.05101},
primaryClass={eess.AS cs.LG cs.SD}
} | yao2024cr-ctc: |
arxiv-666556 | 2410.05102 | SparsePO: Controlling Preference Alignment of LLMs via Sparse Token Masks | <|reference_start|>SparsePO: Controlling Preference Alignment of LLMs via Sparse Token Masks: Preference Optimization (PO) has proven an effective step for aligning language models to human-desired behaviors. Current variants, following the offline Direct Preference Optimization objective, have focused on a strict setting where all tokens are contributing signals of KL divergence and rewards to the loss function. However, human preference is not affected by each word in a sequence equally but is often dependent on specific words or phrases, e.g. existence of toxic terms leads to non-preferred responses. Based on this observation, we argue that not all tokens should be weighted equally during PO and propose a flexible objective termed SparsePO, that aims to automatically learn to weight the KL divergence and reward corresponding to each token during PO training. We propose two different variants of weight-masks that can either be derived from the reference model itself or learned on the fly. Notably, our method induces sparsity in the learned masks, allowing the model to learn how to best weight reward and KL divergence contributions at the token level, learning an optimal level of mask sparsity. Extensive experiments on multiple domains, including sentiment control, dialogue, text summarization and text-to-code generation, illustrate that our approach assigns meaningful weights to tokens according to the target task, generates more responses with the desired preference and improves reasoning tasks by up to 2 percentage points compared to other token- and response-level PO methods.<|reference_end|> | arxiv | @article{christopoulou2024sparsepo:,
title={SparsePO: Controlling Preference Alignment of LLMs via Sparse Token
Masks},
author={Fenia Christopoulou and Ronald Cardenas and Gerasimos Lampouras and
Haitham Bou-Ammar and Jun Wang},
journal={arXiv preprint arXiv:2410.05102},
year={2024},
archivePrefix={arXiv},
eprint={2410.05102},
primaryClass={cs.CL cs.AI cs.LG}
} | christopoulou2024sparsepo: |
arxiv-666557 | 2410.05103 | MetaDD: Boosting Dataset Distillation with Neural Network Architecture-Invariant Generalization | <|reference_start|>MetaDD: Boosting Dataset Distillation with Neural Network Architecture-Invariant Generalization: Dataset distillation (DD) entails creating a refined, compact distilled dataset from a large-scale dataset to facilitate efficient training. A significant challenge in DD is the dependency between the distilled dataset and the neural network (NN) architecture used. Training a different NN architecture with a distilled dataset distilled using a specific architecture often results in diminished training performance for other architectures. This paper introduces MetaDD, designed to enhance the generalizability of DD across various NN architectures. Specifically, MetaDD partitions distilled data into meta features (i.e., the data's common characteristics that remain consistent across different NN architectures) and heterogeneous features (i.e., the data's unique feature to each NN architecture). Then, MetaDD employs an architecture-invariant loss function for multi-architecture feature alignment, which increases meta features and reduces heterogeneous features in distilled data. As a low-memory consumption component, MetaDD can be seamlessly integrated into any DD methodology. Experimental results demonstrate that MetaDD significantly improves performance across various DD methods. On the Distilled Tiny-Imagenet with Sre2L (50 IPC), MetaDD achieves cross-architecture NN accuracy of up to 30.1\%, surpassing the second-best method (GLaD) by 1.7\%.<|reference_end|> | arxiv | @article{zhao2024metadd:,
title={MetaDD: Boosting Dataset Distillation with Neural Network
Architecture-Invariant Generalization},
author={Yunlong Zhao and Xiaoheng Deng and Xiu Su and Hongyan Xu and Xiuxing
Li and Yijing Liu and Shan You},
journal={arXiv preprint arXiv:2410.05103},
year={2024},
archivePrefix={arXiv},
eprint={2410.05103},
primaryClass={cs.CV}
} | zhao2024metadd: |
arxiv-666558 | 2410.05105 | AI-Enhanced Ethical Hacking: A Linux-Focused Experiment | <|reference_start|>AI-Enhanced Ethical Hacking: A Linux-Focused Experiment: This technical report investigates the integration of generative AI (GenAI), specifically ChatGPT, into the practice of ethical hacking through a comprehensive experimental study and conceptual analysis. Conducted in a controlled virtual environment, the study evaluates GenAI's effectiveness across the key stages of penetration testing on Linux-based target machines operating within a virtual local area network (LAN), including reconnaissance, scanning and enumeration, gaining access, maintaining access, and covering tracks. The findings confirm that GenAI can significantly enhance and streamline the ethical hacking process while underscoring the importance of balanced human-AI collaboration rather than the complete replacement of human input. The report also critically examines potential risks such as misuse, data biases, hallucination, and over-reliance on AI. This research contributes to the ongoing discussion on the ethical use of AI in cybersecurity and highlights the need for continued innovation to strengthen security defences.<|reference_end|> | arxiv | @article{al-sinani2024ai-enhanced,
title={AI-Enhanced Ethical Hacking: A Linux-Focused Experiment},
author={Haitham S. Al-Sinani and Chris J. Mitchell},
journal={arXiv preprint arXiv:2410.05105},
year={2024},
archivePrefix={arXiv},
eprint={2410.05105},
primaryClass={cs.CR cs.AI}
} | al-sinani2024ai-enhanced |
arxiv-666559 | 2410.05106 | Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson-Romberg Extrapolation | <|reference_start|>Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson-Romberg Extrapolation: We address the problem of solving strongly convex and smooth minimization problems using stochastic gradient descent (SGD) algorithm with a constant step size. Previous works suggested to combine the Polyak-Ruppert averaging procedure with the Richardson-Romberg extrapolation technique to reduce the asymptotic bias of SGD at the expense of a mild increase of the variance. We significantly extend previous results by providing an expansion of the mean-squared error of the resulting estimator with respect to the number of iterations $n$. More precisely, we show that the mean-squared error can be decomposed into the sum of two terms: a leading one of order $\mathcal{O}(n^{-1/2})$ with explicit dependence on a minimax-optimal asymptotic covariance matrix, and a second-order term of order $\mathcal{O}(n^{-3/4})$ where the power $3/4$ can not be improved in general. We also extend this result to the $p$-th moment bound keeping optimal scaling of the remainders with respect to $n$. Our analysis relies on the properties of the SGD iterates viewed as a time-homogeneous Markov chain. In particular, we establish that this chain is geometrically ergodic with respect to a suitably defined weighted Wasserstein semimetric.<|reference_end|> | arxiv | @article{sheshukova2024nonasymptotic,
title={Nonasymptotic Analysis of Stochastic Gradient Descent with the
Richardson-Romberg Extrapolation},
author={Marina Sheshukova and Denis Belomestny and Alain Durmus and Eric
Moulines and Alexey Naumov and Sergey Samsonov},
journal={arXiv preprint arXiv:2410.05106},
year={2024},
archivePrefix={arXiv},
eprint={2410.05106},
primaryClass={math.OC cs.LG stat.ML}
} | sheshukova2024nonasymptotic |
arxiv-666560 | 2410.05107 | Hyper-Representations: Learning from Populations of Neural Networks | <|reference_start|>Hyper-Representations: Learning from Populations of Neural Networks: This thesis addresses the challenge of understanding Neural Networks through the lens of their most fundamental component: the weights, which encapsulate the learned information and determine the model behavior. At the core of this thesis is a fundamental question: Can we learn general, task-agnostic representations from populations of Neural Network models? The key contribution of this thesis to answer that question are hyper-representations, a self-supervised method to learn representations of NN weights. Work in this thesis finds that trained NN models indeed occupy meaningful structures in the weight space, that can be learned and used. Through extensive experiments, this thesis demonstrates that hyper-representations uncover model properties, such as their performance, state of training, or hyperparameters. Moreover, the identification of regions with specific properties in hyper-representation space allows to sample and generate model weights with targeted properties. This thesis demonstrates applications for fine-tuning, and transfer learning to great success. Lastly, it presents methods that allow hyper-representations to generalize beyond model sizes, architectures, and tasks. The practical implications of that are profound, as it opens the door to foundation models of Neural Networks, which aggregate and instantiate their knowledge across models and architectures. Ultimately, this thesis contributes to the deeper understanding of Neural Networks by investigating structures in their weights which leads to more interpretable, efficient, and adaptable models. 
By laying the groundwork for representation learning of NN weights, this research demonstrates the potential to change the way Neural Networks are developed, analyzed, and used.<|reference_end|> | arxiv | @article{schürholt2024hyper-representations:,
title={Hyper-Representations: Learning from Populations of Neural Networks},
author={Konstantin Sch\"urholt},
journal={arXiv preprint arXiv:2410.05107},
year={2024},
archivePrefix={arXiv},
eprint={2410.05107},
primaryClass={cs.LG}
} | schürholt2024hyper-representations: |
arxiv-666561 | 2410.05109 | Secure Software/Hardware Hybrid In-Field Testing for System-on-Chip | <|reference_start|>Secure Software/Hardware Hybrid In-Field Testing for System-on-Chip: Modern Systems-on-Chip (SoCs) incorporate built-in self-test (BIST) modules deeply integrated into the device's intellectual property (IP) blocks. Such modules handle hardware faults and defects during device operation. As such, BIST results potentially reveal the internal structure and state of the device under test (DUT) and hence open attack vectors. So-called result compaction can overcome this vulnerability by hiding the BIST chain structure but introduces the issues of aliasing and invalid signatures. Software-BIST provides a flexible solution, that can tackle these issues, but suffers from limited observability and fault coverage. In this paper, we hence introduce a low-overhead software/hardware hybrid approach that overcomes the mentioned limitations. It relies on (a) keyed-hash message authentication code (KMAC) available on the SoC providing device-specific secure and valid signatures with zero aliasing and (b) the SoC processor for test scheduling hence increasing DUT availability. The proposed approach offers both on-chip- and remote-testing capabilities. We showcase a RISC-V-based SoC to demonstrate our approach, discussing system overhead and resulting compaction rates.<|reference_end|> | arxiv | @article{mulhem2024secure,
title={Secure Software/Hardware Hybrid In-Field Testing for System-on-Chip},
author={Saleh Mulhem and Christian Ewert and Andrija Neskovic and Amrit
Sharma Poudel and Christoph H\"ubner and Mladen Berekovic and Rainer Buchty},
journal={arXiv preprint arXiv:2410.05109},
year={2024},
archivePrefix={arXiv},
eprint={2410.05109},
primaryClass={cs.AR cs.CR}
} | mulhem2024secure |
arxiv-666562 | 2410.05111 | LiDAR-GS:Real-time LiDAR Re-Simulation using Gaussian Splatting | <|reference_start|>LiDAR-GS:Real-time LiDAR Re-Simulation using Gaussian Splatting: LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving. Although recent advancements, such as the use of reconstructed mesh and Neural Radiance Fields (NeRF), have made progress in simulating the physical properties of LiDAR, these methods have struggled to achieve satisfactory frame rates and rendering quality. To address these limitations, we present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes. The vanilla Gaussian Splatting, designed for camera models, cannot be directly applied to LiDAR re-simulation. To bridge the gap between passive camera and active LiDAR, our LiDAR-GS designs a differentiable laser beam splatting, grounded in the LiDAR range view model. This innovation allows for precise surface splatting by projecting lasers onto micro cross-sections, effectively eliminating artifacts associated with local affine approximations. Additionally, LiDAR-GS leverages Neural Gaussian Fields, which further integrate view-dependent clues, to represent key LiDAR properties that are influenced by the incident angle and external factors. Combining these practices with some essential adaptations, e.g., dynamic instances decomposition, our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets. Our source code will be made publicly available.<|reference_end|> | arxiv | @article{chen2024lidar-gs:real-time,
title={LiDAR-GS:Real-time LiDAR Re-Simulation using Gaussian Splatting},
author={Qifeng Chen and Sheng Yang and Sicong Du and Tao Tang and Peng Chen
and Yuchi Huo},
journal={arXiv preprint arXiv:2410.05111},
year={2024},
archivePrefix={arXiv},
eprint={2410.05111},
primaryClass={cs.CV}
} | chen2024lidar-gs:real-time |
arxiv-666563 | 2410.05114 | Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization | <|reference_start|>Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization: In the realm of dermatological diagnoses, where the analysis of dermatoscopic and microscopic skin lesion images is pivotal for the accurate and early detection of various medical conditions, the costs associated with creating diverse and high-quality annotated datasets have hampered the accuracy and generalizability of machine learning models. We propose an innovative unsupervised augmentation solution that harnesses Generative Adversarial Network (GAN) based models and associated techniques over their latent space to generate controlled semiautomatically-discovered semantic variations in dermatoscopic images. We created synthetic images to incorporate the semantic variations and augmented the training data with these images. With this approach, we were able to increase the performance of machine learning models and set a new benchmark amongst non-ensemble based models in skin lesion classification on the HAM10000 dataset; and used the observed analytics and generated models for detailed studies on model explainability, affirming the effectiveness of our solution.<|reference_end|> | arxiv | @article{mekala2024synthetic,
title={Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form
Factorization},
author={Rohan Reddy Mekala and Frederik Pahde and Simon Baur and Sneha
Chandrashekar and Madeline Diep and Markus Wenzel and Eric L. Wisotzky and
Galip \"Umit Yolcu and Sebastian Lapuschkin and Jackie Ma and Peter Eisert
and Mikael Lindvall and Adam Porter and Wojciech Samek},
journal={arXiv preprint arXiv:2410.05114},
year={2024},
archivePrefix={arXiv},
eprint={2410.05114},
primaryClass={cs.CV cs.AI}
} | mekala2024synthetic |
arxiv-666564 | 2410.05115 | AlphaRouter: Quantum Circuit Routing with Reinforcement Learning and Tree Search | <|reference_start|>AlphaRouter: Quantum Circuit Routing with Reinforcement Learning and Tree Search: Quantum computers have the potential to outperform classical computers in important tasks such as optimization and number factoring. They are characterized by limited connectivity, which necessitates the routing of their computational bits, known as qubits, to specific locations during program execution to carry out quantum operations. Traditionally, the NP-hard optimization problem of minimizing the routing overhead has been addressed through sub-optimal rule-based routing techniques with inherent human biases embedded within the cost function design. This paper introduces a solution that integrates Monte Carlo Tree Search (MCTS) with Reinforcement Learning (RL). Our RL-based router, called AlphaRouter, outperforms the current state-of-the-art routing methods and generates quantum programs with up to $20\%$ less routing overhead, thus significantly enhancing the overall efficiency and feasibility of quantum computing.<|reference_end|> | arxiv | @article{tang2024alpharouter:,
title={AlphaRouter: Quantum Circuit Routing with Reinforcement Learning and
Tree Search},
author={Wei Tang and Yiheng Duan and Yaroslav Kharkov and Rasool Fakoor and
Eric Kessler and Yunong Shi},
journal={arXiv preprint arXiv:2410.05115},
year={2024},
archivePrefix={arXiv},
eprint={2410.05115},
primaryClass={quant-ph cs.AI cs.SY eess.SY}
} | tang2024alpharouter: |
arxiv-666565 | 2410.05116 | Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning | <|reference_start|>Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning: Controllable generation through Stable Diffusion (SD) fine-tuning aims to improve fidelity, safety, and alignment with human guidance. Existing reinforcement learning from human feedback methods usually rely on predefined heuristic reward functions or pretrained reward models built on large-scale datasets, limiting their applicability to scenarios where collecting such data is costly or difficult. To effectively and efficiently utilize human feedback, we develop a framework, HERO, which leverages online human feedback collected on the fly during model learning. Specifically, HERO features two key mechanisms: (1) Feedback-Aligned Representation Learning, an online training method that captures human feedback and provides informative learning signals for fine-tuning, and (2) Feedback-Guided Image Generation, which involves generating images from SD's refined initialization samples, enabling faster convergence towards the evaluator's intent. We demonstrate that HERO is 4x more efficient in online feedback for body part anomaly correction compared to the best existing method. Additionally, experiments show that HERO can effectively handle tasks like reasoning, counting, personalization, and reducing NSFW content with only 0.5K online feedback.<|reference_end|> | arxiv | @article{hiranaka2024human-feedback,
title={Human-Feedback Efficient Reinforcement Learning for Online Diffusion
Model Finetuning},
author={Ayano Hiranaka and Shang-Fu Chen and Chieh-Hsin Lai and Dongjun Kim
and Naoki Murata and Takashi Shibuya and Wei-Hsiang Liao and Shao-Hua Sun
and Yuki Mitsufuji},
journal={arXiv preprint arXiv:2410.05116},
year={2024},
archivePrefix={arXiv},
eprint={2410.05116},
primaryClass={cs.LG cs.AI cs.CV cs.HC}
} | hiranaka2024human-feedback |
arxiv-666566 | 2410.05117 | Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound Framework and Characterization for Bandit Learnability | <|reference_start|>Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound Framework and Characterization for Bandit Learnability: In this paper, we develop a unified framework for lower bound methods in statistical estimation and interactive decision making. Classical lower bound techniques -- such as Fano's inequality, Le Cam's method, and Assouad's lemma -- have been central to the study of minimax risk in statistical estimation, yet they are insufficient for the analysis of methods that collect data in an interactive manner. The recent minimax lower bounds for interactive decision making via the Decision-Estimation Coefficient (DEC) appear to be genuinely different from the classical methods. We propose a unified view of these distinct methodologies through a general algorithmic lower bound method. We further introduce a novel complexity measure, decision dimension, which facilitates the derivation of new lower bounds for interactive decision making. In particular, decision dimension provides a characterization of bandit learnability for any structured bandit model class. Further, we characterize the sample complexity of learning convex model class up to a polynomial gap with the decision dimension, addressing the remaining gap between upper and lower bounds in Foster et al. (2021, 2023).<|reference_end|> | arxiv | @article{chen2024assouad,
title={Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound
Framework and Characterization for Bandit Learnability},
author={Fan Chen and Dylan J. Foster and Yanjun Han and Jian Qian and
Alexander Rakhlin and Yunbei Xu},
journal={arXiv preprint arXiv:2410.05117},
year={2024},
archivePrefix={arXiv},
eprint={2410.05117},
primaryClass={cs.LG cs.IT math.IT math.ST stat.ML stat.TH}
} | chen2024assouad
arxiv-666567 | 2410.05121 | Foil Conductor Model for Efficient Simulation of HTS Coils in Large Scale Applications | <|reference_start|>Foil Conductor Model for Efficient Simulation of HTS Coils in Large Scale Applications: Homogenization techniques are an appealing approach to reduce computational complexity in systems containing coils with large numbers of high temperature superconductor (HTS) tapes. Resolving all the coated conductor layers and turns in coils is often computationally prohibitive. In this paper, we extend the foil conductor model, well-known in normal conducting applications, to applications with insulated HTS coils. To enhance the numerical performance of the model, the conventional formulation based on A-V is extended to J-A-V. The model is verified to be suitable for simulations of superconductors and to accelerate the calculations compared to resolving all the individual layers. The performance of both the A-V and J-A-V formulated models is examined, and the J-A-V variant is concluded to be advantageous.<|reference_end|> | arxiv | @article{paakkunainen2024foil,
title={Foil Conductor Model for Efficient Simulation of HTS Coils in Large
Scale Applications},
author={Elias Paakkunainen and Louis Denis and Christophe Geuzaine and Paavo
Rasilo and Sebastian Sch\"ops},
journal={arXiv preprint arXiv:2410.05121},
year={2024},
archivePrefix={arXiv},
eprint={2410.05121},
primaryClass={cs.CE}
} | paakkunainen2024foil |
arxiv-666568 | 2410.05123 | Upgrading SPHERE with the second stage AO system SAXO+: frequency-based data-driven controller for adaptive optics | <|reference_start|>Upgrading SPHERE with the second stage AO system SAXO+: frequency-based data-driven controller for adaptive optics: This study introduces a novel frequency-based data-driven controller for adaptive optics, using power spectral density for optimization while ensuring stability criteria. It addresses disturbance rejection, command amplitude constraints and system transfer functions through convex optimization to obtain an optimal control in an infinite impulse response filter form. Evaluated within the SAXO+ project, it demonstrates efficacy under diverse atmospheric conditions and operational scenarios. The proposed controller is tested in both standard and disentangled adaptive optics schemes, showcasing its adaptability and performance. Experimental validation is conducted using the COMPASS simulation tool, affirming the controller's promise for enhancing adaptive optics systems in real-world applications.<|reference_end|> | arxiv | @article{dinis2024upgrading,
title={Upgrading SPHERE with the second stage AO system SAXO+: frequency-based
data-driven controller for adaptive optics},
author={Isaac Dinis and Fran\c{c}ois Wildi and Damien S\'egransan and
Vaibhav Gupta and Alireza Karimi and Michel Tallon and Isabelle Bosc and
Maud Langlois and Magali Loupias and Cl\'ementine Bechet and Eric
Thi\'ebaut and Charles Goulas and Florian Ferreira and Anthony Boccaletti
and Fabrice Vidal and Caroline Kulcsar and Henri-Fran\c{c}ois Raynaud and
Nicolas Galland and Markus Kasper and Julien Milli and David Mouillet and
Laura Schreiber and Emiliano Diolaiti and Raffaele Gratton and Gael Chauvin},
journal={arXiv preprint arXiv:2410.05123},
year={2024},
doi={10.1117/12.3020179},
archivePrefix={arXiv},
eprint={2410.05123},
primaryClass={eess.SY astro-ph.IM cs.SY}
} | dinis2024upgrading |
arxiv-666569 | 2410.05124 | Agnostic Smoothed Online Learning | <|reference_start|>Agnostic Smoothed Online Learning: Classical results in statistical learning typically consider two extreme data-generating models: i.i.d. instances from an unknown distribution, or fully adversarial instances, often much more challenging statistically. To bridge the gap between these models, recent work introduced the smoothed framework, in which at each iteration an adversary generates instances from a distribution constrained to have density bounded by $\sigma^{-1}$ compared to some fixed base measure $\mu$. This framework interpolates between the i.i.d. and adversarial cases, depending on the value of $\sigma$. For the classical online prediction problem, most prior results in smoothed online learning rely on the arguably strong assumption that the base measure $\mu$ is known to the learner, contrasting with standard settings in the PAC learning or consistency literature. We consider the general agnostic problem in which the base measure is unknown and values are arbitrary. Along this direction, Block et al. showed that empirical risk minimization has sublinear regret under the well-specified assumption. We propose an algorithm R-Cover based on recursive coverings which is the first to guarantee sublinear regret for agnostic smoothed online learning without prior knowledge of $\mu$. For classification, we prove that R-Cover has adaptive regret $\tilde O(\sqrt{dT/\sigma})$ for function classes with VC dimension $d$, which is optimal up to logarithmic factors. For regression, we establish that R-Cover has sublinear oblivious regret for function classes with polynomial fat-shattering dimension growth.<|reference_end|> | arxiv | @article{blanchard2024agnostic,
title={Agnostic Smoothed Online Learning},
  author={Mo\"ise Blanchard},
journal={arXiv preprint arXiv:2410.05124},
year={2024},
archivePrefix={arXiv},
eprint={2410.05124},
primaryClass={stat.ML cs.LG}
} | blanchard2024agnostic |
arxiv-666570 | 2410.05127 | Last Iterate Convergence in Monotone Mean Field Games | <|reference_start|>Last Iterate Convergence in Monotone Mean Field Games: Mean Field Game (MFG) is a framework utilized to model and approximate the behavior of a large number of agents, and the computation of equilibria in MFG has been a subject of interest. Despite the proposal of methods to approximate the equilibria, algorithms where the sequence of updated policy converges to equilibrium, specifically those exhibiting last-iterate convergence, have been limited. We propose the use of a simple, proximal-point-type algorithm to compute equilibria for MFGs. Subsequently, we provide the first last-iterate convergence guarantee under the Lasry--Lions-type monotonicity condition. We further employ the Mirror Descent algorithm for the regularized MFG to efficiently approximate the update rules of the proximal point method for MFGs. We demonstrate that the algorithm can approximate with an accuracy of $\varepsilon$ after $\mathcal{O}({\log(1/\varepsilon)})$ iterations. This research offers a tractable approach for large-scale and large-population games.<|reference_end|> | arxiv | @article{isobe2024last,
title={Last Iterate Convergence in Monotone Mean Field Games},
author={Noboru Isobe, Kenshi Abe, Kaito Ariu},
journal={arXiv preprint arXiv:2410.05127},
year={2024},
archivePrefix={arXiv},
eprint={2410.05127},
primaryClass={cs.GT cs.AI}
} | isobe2024last |
arxiv-666571 | 2410.05130 | Scalable and Accurate Graph Reasoning with LLM-based Multi-Agents | <|reference_start|>Scalable and Accurate Graph Reasoning with LLM-based Multi-Agents: Recent research has explored the use of Large Language Models (LLMs) for tackling complex graph reasoning tasks. However, due to the intricacies of graph structures and the inherent limitations of LLMs in handling long text, current approaches often fail to deliver satisfactory accuracy, even on small-scale graphs and simple tasks. To address these challenges, we introduce GraphAgent-Reasoner, a fine-tuning-free framework that utilizes a multi-agent collaboration strategy for explicit and precise graph reasoning. Inspired by distributed graph computation theory, our framework decomposes graph problems into smaller, node-centric tasks that are distributed among multiple agents. The agents collaborate to solve the overall problem, significantly reducing the amount of information and complexity handled by a single LLM, thus enhancing the accuracy of graph reasoning. By simply increasing the number of agents, GraphAgent-Reasoner can efficiently scale to accommodate larger graphs with over 1,000 nodes. Evaluated on the GraphInstruct dataset, our framework demonstrates near-perfect accuracy on polynomial-time graph reasoning tasks, significantly outperforming the best available models, both closed-source and fine-tuned open-source variants. Our framework also demonstrates the capability to handle real-world graph reasoning applications such as webpage importance analysis.<|reference_end|> | arxiv | @article{hu2024scalable,
title={Scalable and Accurate Graph Reasoning with LLM-based Multi-Agents},
author={Yuwei Hu, Runlin Lei, Xinyi Huang, Zhewei Wei, Yongchao Liu},
journal={arXiv preprint arXiv:2410.05130},
year={2024},
archivePrefix={arXiv},
eprint={2410.05130},
primaryClass={cs.AI}
} | hu2024scalable |
arxiv-666572 | 2410.05131 | Enhancing Job Interview Preparation Through Immersive Experiences Using Photorealistic, AI-powered Metahuman Avatars | <|reference_start|>Enhancing Job Interview Preparation Through Immersive Experiences Using Photorealistic, AI-powered Metahuman Avatars: This study will investigate the user experience while interacting with highly photorealistic virtual job interviewer avatars in Virtual Reality (VR), Augmented Reality (AR), and on a 2D screen. Having a precise speech recognition mechanism, our virtual character performs a mock-up software engineering job interview to adequately immerse the user in a life-like scenario. To evaluate the efficiency of our system, we measure factors such as the provoked level of anxiety, social presence, self-esteem, and intrinsic motivation. This research is a work in progress with a prospective within-subject user study including approximately 40 participants. All users will engage with three job interview conditions (VR, AR, and desktop) and provide their feedback. Additionally, users' bio-physical responses will be collected using a biosensor to measure the level of anxiety during the job interview.<|reference_end|> | arxiv | @article{ashrafi2024enhancing,
title={Enhancing Job Interview Preparation Through Immersive Experiences Using
Photorealistic, AI-powered Metahuman Avatars},
author={Navid Ashrafi, Francesco Vona, Carina Ringsdorf, Christian Hertel,
Luca Toni, Sarina Kailer, Alice Bartels, Tanja Kojic, Jan-Niklas Voigt-Antons},
journal={arXiv preprint arXiv:2410.05131},
year={2024},
archivePrefix={arXiv},
eprint={2410.05131},
primaryClass={cs.HC}
} | ashrafi2024enhancing |
arxiv-666573 | 2410.05133 | A Digital Twin Framework for Liquid-cooled Supercomputers as Demonstrated at Exascale | <|reference_start|>A Digital Twin Framework for Liquid-cooled Supercomputers as Demonstrated at Exascale: We present ExaDigiT, an open-source framework for developing comprehensive digital twins of liquid-cooled supercomputers. It integrates three main modules: (1) a resource allocator and power simulator, (2) a transient thermo-fluidic cooling model, and (3) an augmented reality model of the supercomputer and central energy plant. The framework enables the study of "what-if" scenarios, system optimizations, and virtual prototyping of future systems. Using Frontier as a case study, we demonstrate the framework's capabilities by replaying six months of system telemetry for systematic verification and validation. Such a comprehensive analysis of a liquid-cooled exascale supercomputer is the first of its kind. ExaDigiT elucidates complex transient cooling system dynamics, runs synthetic or real workloads, and predicts energy losses due to rectification and voltage conversion. Throughout our paper, we present lessons learned to benefit HPC practitioners developing similar digital twins. We envision the digital twin will be a key enabler for sustainable, energy-efficient supercomputing.<|reference_end|> | arxiv | @article{brewer2024a,
title={A Digital Twin Framework for Liquid-cooled Supercomputers as
Demonstrated at Exascale},
author={Wesley Brewer, Matthias Maiterth, Vineet Kumar, Rafal Wojda, Sedrick
Bouknight, Jesse Hines, Woong Shin, Scott Greenwood, David Grant, Wesley
Williams, and Feiyi Wang},
journal={arXiv preprint arXiv:2410.05133},
year={2024},
doi={10.1109/SC41406.2024.00029},
archivePrefix={arXiv},
eprint={2410.05133},
primaryClass={cs.DC cs.LG}
} | brewer2024a |
arxiv-666574 | 2410.05135 | Quantization Design for Resistive Memories With Multiple Reads | <|reference_start|>Quantization Design for Resistive Memories With Multiple Reads: Due to the crossbar array architecture, the sneak-path problem severely degrades the data integrity in the resistive random access memory (ReRAM). In this letter, we investigate the channel quantizer design for ReRAM arrays with multiple reads, which is a typical technique to improve the data recovery performance of data storage systems. Starting with a quantized channel model of ReRAM with multiple reads, we first derive a general approach for designing the channel quantizer, for both single-bit and multiple-bit quantization. We then focus on the single-bit quantization, which is highly suitable for practical applications of ReRAM. In particular, we propose a semi-analytical approach to design the multiple-read single-bit quantizer with less complexity. We also derive the theoretical bit-error probability of the optimal single-bit detector/quantization as the benchmark. Results indicate that the multiple-read operation is effective in improving the error rate performance of ReRAM. Moreover, our proposed multiple-read detector outperforms the prior art detector and achieves the performance of the optimal detector.<|reference_end|> | arxiv | @article{mei2024quantization,
title={Quantization Design for Resistive Memories With Multiple Reads},
author={Zhen Mei, Kui Cai, Long Shi, and Jun Li},
journal={arXiv preprint arXiv:2410.05135},
year={2024},
archivePrefix={arXiv},
eprint={2410.05135},
primaryClass={cs.IT math.IT}
} | mei2024quantization |
arxiv-666575 | 2410.05136 | LOTOS: Layer-wise Orthogonalization for Training Robust Ensembles | <|reference_start|>LOTOS: Layer-wise Orthogonalization for Training Robust Ensembles: Transferability of adversarial examples is a well-known property that endangers all classification models, even those that are only accessible through black-box queries. Prior work has shown that an ensemble of models is more resilient to transferability: the probability that an adversarial example is effective against most models of the ensemble is low. Thus, most ongoing research focuses on improving ensemble diversity. Another line of prior work has shown that Lipschitz continuity of the models can make models more robust since it limits how a model's output changes with small input perturbations. In this paper, we study the effect of Lipschitz continuity on transferability rates. We show that although a lower Lipschitz constant increases the robustness of a single model, it is not as beneficial in training robust ensembles as it increases the transferability rate of adversarial examples across models in the ensemble. Therefore, we introduce LOTOS, a new training paradigm for ensembles, which counteracts this adverse effect. It does so by promoting orthogonality among the top-$k$ sub-spaces of the transformations of the corresponding affine layers of any pair of models in the ensemble. We theoretically show that $k$ does not need to be large for convolutional layers, which makes the computational overhead negligible. Through various experiments, we show LOTOS increases the robust accuracy of ensembles of ResNet-18 models by $6$ percentage points (p.p) against black-box attacks on CIFAR-10. It is also capable of combining with the robustness of prior state-of-the-art methods for training robust ensembles to enhance their robust accuracy by $10.7$ p.p.<|reference_end|> | arxiv | @article{ebrahimpour-boroojeny2024lotos:,
title={LOTOS: Layer-wise Orthogonalization for Training Robust Ensembles},
author={Ali Ebrahimpour-Boroojeny, Hari Sundaram, and Varun Chandrasekaran},
journal={arXiv preprint arXiv:2410.05136},
year={2024},
archivePrefix={arXiv},
eprint={2410.05136},
primaryClass={cs.LG stat.ML}
} | ebrahimpour-boroojeny2024lotos: |
arxiv-666576 | 2410.05139 | Generative Reduced Basis Method | <|reference_start|>Generative Reduced Basis Method: We present a generative reduced basis (RB) approach to construct reduced order models for parametrized partial differential equations. Central to this approach is the construction of generative RB spaces that provide rapidly convergent approximations of the solution manifold. We introduce a generative snapshot method to generate significantly larger sets of snapshots from a small initial set of solution snapshots. This method leverages multivariate nonlinear transformations to enrich the RB spaces, allowing for a more accurate approximation of the solution manifold than commonly used techniques such as proper orthogonal decomposition and greedy sampling. The key components of our approach include (i) a Galerkin projection of the full order model onto the generative RB space to form the reduced order model; (ii) a posteriori error estimates to certify the accuracy of the reduced order model; and (iii) an offline-online decomposition to separate the computationally intensive model construction, performed once during the offline stage, from the real-time model evaluations performed many times during the online stage. The error estimates allow us to efficiently explore the parameter space and select parameter points that maximize the accuracy of the reduced order model. Through numerical experiments, we demonstrate that the generative RB method not only improves the accuracy of the reduced order model but also provides tight error estimates.<|reference_end|> | arxiv | @article{nguyen2024generative,
title={Generative Reduced Basis Method},
author={Ngoc Cuong Nguyen},
journal={arXiv preprint arXiv:2410.05139},
year={2024},
archivePrefix={arXiv},
eprint={2410.05139},
primaryClass={math.NA cs.NA}
} | nguyen2024generative |
arxiv-666577 | 2410.05140 | Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis | <|reference_start|>Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis: Bilevel optimization has recently attracted considerable attention due to its abundant applications in machine learning problems. However, existing methods rely on prior knowledge of problem parameters to determine stepsizes, resulting in significant effort in tuning stepsizes when these parameters are unknown. In this paper, we propose two novel tuning-free algorithms, D-TFBO and S-TFBO. D-TFBO employs a double-loop structure with stepsizes adaptively adjusted by the "inverse of cumulative gradient norms" strategy. S-TFBO features a simpler fully single-loop structure that updates three variables simultaneously with a theory-motivated joint design of adaptive stepsizes for all variables. We provide a comprehensive convergence analysis for both algorithms and show that D-TFBO and S-TFBO respectively require $O(\frac{1}{\epsilon})$ and $O(\frac{1}{\epsilon}\log^4(\frac{1}{\epsilon}))$ iterations to find an $\epsilon$-accurate stationary point, (nearly) matching their well-tuned counterparts using the information of problem parameters. Experiments on various problems show that our methods achieve performance comparable to existing well-tuned approaches, while being more robust to the selection of initial stepsizes. To the best of our knowledge, our methods are the first to completely eliminate the need for stepsize tuning, while achieving theoretical guarantees.<|reference_end|> | arxiv | @article{yang2024tuning-free,
title={Tuning-Free Bilevel Optimization: New Algorithms and Convergence
Analysis},
author={Yifan Yang, Hao Ban, Minhui Huang, Shiqian Ma, Kaiyi Ji},
journal={arXiv preprint arXiv:2410.05140},
year={2024},
archivePrefix={arXiv},
eprint={2410.05140},
primaryClass={cs.LG stat.ML}
} | yang2024tuning-free |
arxiv-666578 | 2410.05143 | Leveraging Multimodal Diffusion Models to Accelerate Imaging with Side Information | <|reference_start|>Leveraging Multimodal Diffusion Models to Accelerate Imaging with Side Information: Diffusion models have found phenomenal success as expressive priors for solving inverse problems, but their extension beyond natural images to more structured scientific domains remains limited. Motivated by applications in materials science, we aim to reduce the number of measurements required from an expensive imaging modality of interest, by leveraging side information from an auxiliary modality that is much cheaper to obtain. To deal with the non-differentiable and black-box nature of the forward model, we propose a framework to train a multimodal diffusion model over the joint modalities, turning inverse problems with black-box forward models into simple linear inpainting problems. Numerically, we demonstrate the feasibility of training diffusion models over materials imagery data, and show that our approach achieves superior image reconstruction by leveraging the available side information, requiring significantly less amount of data from the expensive microscopy modality.<|reference_end|> | arxiv | @article{efimov2024leveraging,
title={Leveraging Multimodal Diffusion Models to Accelerate Imaging with Side
Information},
author={Timofey Efimov, Harry Dong, Megna Shah, Jeff Simmons, Sean Donegan,
Yuejie Chi},
journal={arXiv preprint arXiv:2410.05143},
year={2024},
archivePrefix={arXiv},
eprint={2410.05143},
primaryClass={cs.CV}
} | efimov2024leveraging |
arxiv-666579 | 2410.05146 | CTC-GMM: CTC guided modality matching for fast and accurate streaming speech translation | <|reference_start|>CTC-GMM: CTC guided modality matching for fast and accurate streaming speech translation: Models for streaming speech translation (ST) can achieve high accuracy and low latency if they're developed with vast amounts of paired audio in the source language and written text in the target language. Yet, these text labels for the target language are often pseudo labels due to the prohibitive cost of manual ST data labeling. In this paper, we introduce a methodology named Connectionist Temporal Classification guided modality matching (CTC-GMM) that enhances the streaming ST model by leveraging extensive machine translation (MT) text data. This technique employs CTC to compress the speech sequence into a compact embedding sequence that matches the corresponding text sequence, allowing us to utilize matched {source-target} language text pairs from the MT corpora to refine the streaming ST model further. Our evaluations with FLEURS and CoVoST2 show that the CTC-GMM approach can increase translation accuracy relatively by 13.9% and 6.4% respectively, while also boosting decoding speed by 59.7% on GPU.<|reference_end|> | arxiv | @article{zhao2024ctc-gmm:,
title={CTC-GMM: CTC guided modality matching for fast and accurate streaming
speech translation},
author={Rui Zhao, Jinyu Li, Ruchao Fan, Matt Post},
journal={arXiv preprint arXiv:2410.05146},
year={2024},
archivePrefix={arXiv},
eprint={2410.05146},
primaryClass={cs.CL cs.AI eess.AS}
} | zhao2024ctc-gmm: |
arxiv-666580 | 2410.05147 | PAMLR: A Passive-Active Multi-Armed Bandit-Based Solution for LoRa Channel Allocation | <|reference_start|>PAMLR: A Passive-Active Multi-Armed Bandit-Based Solution for LoRa Channel Allocation: Achieving low duty cycle operation in low-power wireless networks in urban environments is complicated by the complex and variable dynamics of external interference and fading. We explore the use of reinforcement learning for achieving low power consumption for the task of optimal selection of channels. The learning relies on a hybrid of passive channel sampling for dealing with external interference and active channel sampling for dealing with fading. Our solution, Passive-Active Multi-armed bandit for LoRa (PAMLR, pronounced "Pamela"), balances the two types of samples to achieve energy-efficient channel selection: active channel measurements are tuned to an appropriately low level to update noise thresholds, and to compensate passive channel measurements are tuned to an appropriately high level for selecting the top-most channels from channel exploration using the noise thresholds. The rates of both types of samples are adapted in response to channel dynamics. Based on extensive testing in multiple environments in different cities, we validate that PAMLR can maintain excellent communication quality, as demonstrated by a low SNR regret compared to the optimal channel allocation policy, while substantially minimizing the energy cost associated with channel measurements.<|reference_end|> | arxiv | @article{yun2024pamlr:,
title={PAMLR: A Passive-Active Multi-Armed Bandit-Based Solution for LoRa
Channel Allocation},
author={Jihoon Yun and Chengzhang Li and Anish Arora},
journal={arXiv preprint arXiv:2410.05147},
year={2024},
doi={10.1145/3600100.3623725},
archivePrefix={arXiv},
eprint={2410.05147},
primaryClass={cs.NI cs.LG}
} | yun2024pamlr: |
arxiv-666581 | 2410.05151 | Editing Music with Melody and Text: Using ControlNet for Diffusion Transformer | <|reference_start|>Editing Music with Melody and Text: Using ControlNet for Diffusion Transformer: Despite the significant progress in controllable music generation and editing, challenges remain in the quality and length of generated music due to the use of Mel-spectrogram representations and UNet-based model structures. To address these limitations, we propose a novel approach using a Diffusion Transformer (DiT) augmented with an additional control branch using ControlNet. This allows for long-form and variable-length music generation and editing controlled by text and melody prompts. For more precise and fine-grained melody control, we introduce a novel top-$k$ constant-Q Transform representation as the melody prompt, reducing ambiguity compared to previous representations (e.g., chroma), particularly for music with multiple tracks or a wide range of pitch values. To effectively balance the control signals from text and melody prompts, we adopt a curriculum learning strategy that progressively masks the melody prompt, resulting in a more stable training process. Experiments have been performed on text-to-music generation and music-style transfer tasks using open-source instrumental recording data. The results demonstrate that by extending StableAudio, a pre-trained text-controlled DiT model, our approach enables superior melody-controlled editing while retaining good text-to-music generation performance. These results outperform a strong MusicGen baseline in terms of both text-based generation and melody preservation for editing. Audio examples can be found at https://stable-audio-control.github.io/web/.<|reference_end|> | arxiv | @article{hou2024editing,
title={Editing Music with Melody and Text: Using ControlNet for Diffusion
Transformer},
author={Siyuan Hou, Shansong Liu, Ruibin Yuan, Wei Xue, Ying Shan, Mangsuo
Zhao, Chao Zhang},
journal={arXiv preprint arXiv:2410.05151},
year={2024},
archivePrefix={arXiv},
eprint={2410.05151},
primaryClass={eess.AS cs.SD}
} | hou2024editing |
arxiv-666582 | 2410.05152 | Real-Time Truly-Coupled Lidar-Inertial Motion Correction and Spatiotemporal Dynamic Object Detection | <|reference_start|>Real-Time Truly-Coupled Lidar-Inertial Motion Correction and Spatiotemporal Dynamic Object Detection: Over the past decade, lidars have become a cornerstone of robotics state estimation and perception thanks to their ability to provide accurate geometric information about their surroundings in the form of 3D scans. Unfortunately, most of nowadays lidars do not take snapshots of the environment but sweep the environment over a period of time (typically around 100 ms). Such a rolling-shutter-like mechanism introduces motion distortion into the collected lidar scan, thus hindering downstream perception applications. In this paper, we present a novel method for motion distortion correction of lidar data by tightly coupling lidar with Inertial Measurement Unit (IMU) data. The motivation of this work is a map-free dynamic object detection based on lidar. The proposed lidar data undistortion method relies on continuous preintegrated of IMU measurements that allow parameterising the sensors' continuous 6-DoF trajectory using solely eleven discrete state variables (biases, initial velocity, and gravity direction). The undistortion consists of feature-based distance minimisation of point-to-line and point-to-plane residuals in a non-linear least-square formulation. Given undistorted geometric data over a short temporal window, the proposed pipeline computes the spatiotemporal normal vector of each of the lidar points. The temporal component of the normals is a proxy for the corresponding point's velocity, therefore allowing for learning-free dynamic object classification without the need for registration in a global reference frame. 
We demonstrate the soundness of the proposed method and its different components using public datasets and compare them with state-of-the-art lidar-inertial state estimation and dynamic object detection algorithms.<|reference_end|> | arxiv | @article{gentil2024real-time,
title={Real-Time Truly-Coupled Lidar-Inertial Motion Correction and
Spatiotemporal Dynamic Object Detection},
author={Cedric Le Gentil, Raphael Falque, Teresa Vidal-Calleja},
journal={arXiv preprint arXiv:2410.05152},
year={2024},
archivePrefix={arXiv},
eprint={2410.05152},
primaryClass={cs.RO}
} | gentil2024real-time |
arxiv-666583 | 2410.05153 | Smart Jamming Attack and Mitigation on Deep Transfer Reinforcement Learning Enabled Resource Allocation for Network Slicing | <|reference_start|>Smart Jamming Attack and Mitigation on Deep Transfer Reinforcement Learning Enabled Resource Allocation for Network Slicing: Network slicing is a pivotal paradigm in wireless networks enabling customized services to users and applications. Yet, intelligent jamming attacks threaten the performance of network slicing. In this paper, we focus on the security aspect of network slicing over a deep transfer reinforcement learning (DTRL) enabled scenario. We first demonstrate how a deep reinforcement learning (DRL)-enabled jamming attack exposes potential risks. In particular, the attacker can intelligently jam resource blocks (RBs) reserved for slices by monitoring transmission signals and perturbing the assigned resources. Then, we propose a DRL-driven mitigation model to mitigate the intelligent attacker. Specifically, the defense mechanism generates interference on unallocated RBs where another antenna is used for transmitting powerful signals. This causes the jammer to consider these RBs as allocated RBs and generate interference for those instead of the allocated RBs. The analysis revealed that the intelligent DRL-enabled jamming attack caused a significant 50% degradation in network throughput and 60% increase in latency in comparison with the no-attack scenario. However, with the implemented mitigation measures, we observed 80% improvement in network throughput and 70% reduction in latency in comparison to the under-attack scenario.<|reference_end|> | arxiv | @article{salehi2024smart,
title={Smart Jamming Attack and Mitigation on Deep Transfer Reinforcement
Learning Enabled Resource Allocation for Network Slicing},
author={Shavbo Salehi, Hao Zhou, Medhat Elsayed, Majid Bavand, Raimundas
Gaigalas, Yigit Ozcan, and Melike Erol-Kantarci},
journal={arXiv preprint arXiv:2410.05153},
year={2024},
doi={10.1109/TMLCN.2024.3470760},
archivePrefix={arXiv},
eprint={2410.05153},
primaryClass={cs.NI eess.SP}
} | salehi2024smart |
arxiv-666584 | 2410.05159 | MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense | <|reference_start|>MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense: Model Inversion (MI) attacks aim at leveraging the output information of target models to reconstruct privacy-sensitive training data, raising widespread concerns on privacy threats of Deep Neural Networks (DNNs). Unfortunately, in tandem with the rapid evolution of MI attacks, the lack of a comprehensive, aligned, and reliable benchmark has emerged as a formidable challenge. This deficiency leads to inadequate comparisons between different attack methods and inconsistent experimental setups. In this paper, we introduce the first practical benchmark for model inversion attacks and defenses to address this critical gap, which is named \textit{MIBench}. This benchmark serves as an extensible and reproducible modular-based toolbox and currently integrates a total of 16 state-of-the-art attack and defense methods. Moreover, we furnish a suite of assessment tools encompassing 9 commonly used evaluation protocols to facilitate standardized and fair evaluation and analysis. Capitalizing on this foundation, we conduct extensive experiments from multiple perspectives to holistically compare and analyze the performance of various methods across different scenarios, which overcomes the misalignment issues and discrepancy prevalent in previous works. Based on the collected attack methods and defense strategies, we analyze the impact of target resolution, defense robustness, model predictive power, model architectures, transferability and loss function. 
Our hope is that \textit{MIBench} can provide a unified, practical, and extensible toolbox that is widely utilized by researchers in the field to rigorously test and compare their novel methods, ensuring equitable evaluations and thereby propelling future advancements.<|reference_end|> | arxiv | @article{qiu2024mibench:,
title={MIBench: A Comprehensive Benchmark for Model Inversion Attack and
Defense},
author={Yixiang Qiu, Hongyao Yu, Hao Fang, Wenbo Yu, Bin Chen, Xuan Wang,
Shu-Tao Xia, Ke Xu},
journal={arXiv preprint arXiv:2410.05159},
year={2024},
archivePrefix={arXiv},
eprint={2410.05159},
primaryClass={cs.CV cs.CR}
} | qiu2024mibench: |
arxiv-666585 | 2410.05160 | VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks | <|reference_start|>VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks: Embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering. Recently, there has been a surge of interest in developing universal text embedding models that can generalize across tasks (e.g., MTEB). However, progress in learning universal multimodal embedding models has been relatively slow despite their importance. In this work, we aim to explore the potential for building universal embeddings capable of handling a wide range of downstream tasks. Our contributions are twofold: (1) MMEB (Massive Multimodal Embedding Benchmark), which covers 4 meta-tasks (i.e. classification, visual question answering, multimodal retrieval, and visual grounding) and 36 datasets, including 20 training and 16 evaluation datasets, and (2) VLM2Vec (Vision-Language Model -> Vector), a contrastive training framework that converts any state-of-the-art vision-language model into an embedding model via training on MMEB. Unlike previous models such as CLIP and BLIP, VLM2Vec can process any combination of images and text to generate a fixed-dimensional vector based on task instructions. We build a series of VLM2Vec models on Phi-3.5-V and evaluate them on MMEB's evaluation split. Our results show that \model achieves an absolute average improvement of 10% to 20% over existing multimodal embedding models on both in-distribution and out-of-distribution datasets in MMEB.<|reference_end|> | arxiv | @article{jiang2024vlm2vec:,
title={VLM2Vec: Training Vision-Language Models for Massive Multimodal
Embedding Tasks},
author={Ziyan Jiang, Rui Meng, Xinyi Yang, Semih Yavuz, Yingbo Zhou, Wenhu
Chen},
journal={arXiv preprint arXiv:2410.05160},
year={2024},
archivePrefix={arXiv},
eprint={2410.05160},
primaryClass={cs.CV cs.AI cs.CL}
} | jiang2024vlm2vec: |
arxiv-666586 | 2410.05161 | A Seesaw Model Attack Algorithm for Distributed Learning | <|reference_start|>A Seesaw Model Attack Algorithm for Distributed Learning: We investigate the Byzantine attack problem within the context of model training in distributed learning systems. While ensuring the convergence of current model training processes, common solvers (e.g. SGD, Adam, RMSProp, etc.) can be easily compromised by malicious nodes in these systems. Consequently, the training process may either converge slowly or even diverge. To develop effective secure distributed learning solvers, it is crucial to first examine attack methods to assess the robustness of these solvers. In this work, we contribute to the design of attack strategies by initially highlighting the limitations of finite-norm attacks. We then introduce the seesaw attack, which has been demonstrated to be more effective than the finite-norm attack. Through numerical experiments, we evaluate the efficacy of the seesaw attack across various gradient aggregation rules.<|reference_end|> | arxiv | @article{yang2024a,
title={A Seesaw Model Attack Algorithm for Distributed Learning},
  author={Kun Yang, Tianyi Luo, Yanjie Dong, Aohan Li},
journal={arXiv preprint arXiv:2410.05161},
year={2024},
archivePrefix={arXiv},
eprint={2410.05161},
primaryClass={cs.DC}
} | yang2024a |
arxiv-666587 | 2410.05162 | Deciphering the Interplay of Parametric and Non-parametric Memory in Retrieval-augmented Language Models | <|reference_start|>Deciphering the Interplay of Parametric and Non-parametric Memory in Retrieval-augmented Language Models: Generative language models often struggle with specialized or less-discussed knowledge. A potential solution is found in Retrieval-Augmented Generation (RAG) models which act like retrieving information before generating responses. In this study, we explore how the \textsc{Atlas} approach, a RAG model, decides between what it already knows (parametric) and what it retrieves (non-parametric). We use causal mediation analysis and controlled experiments to examine how internal representations influence information processing. Our findings disentangle the effects of parametric knowledge and the retrieved context. They indicate that in cases where the model can choose between both types of information (parametric and non-parametric), it relies more on the context than the parametric knowledge. Furthermore, the analysis investigates the computations involved in \emph{how} the model uses the information from the context. We find that multiple mechanisms are active within the model and can be detected with mediation analysis: first, the decision of \emph{whether the context is relevant}, and second, how the encoder computes output representations to support copying when relevant.<|reference_end|> | arxiv | @article{farahani2024deciphering,
title={Deciphering the Interplay of Parametric and Non-parametric Memory in
Retrieval-augmented Language Models},
  author={Mehrdad Farahani, Richard Johansson},
journal={arXiv preprint arXiv:2410.05162},
year={2024},
archivePrefix={arXiv},
eprint={2410.05162},
primaryClass={cs.CL}
} | farahani2024deciphering |
arxiv-666588 | 2410.05163 | A Simulation-Free Deep Learning Approach to Stochastic Optimal Control | <|reference_start|>A Simulation-Free Deep Learning Approach to Stochastic Optimal Control: We propose a simulation-free algorithm for the solution of generic problems in stochastic optimal control (SOC). Unlike existing methods, our approach does not require the solution of an adjoint problem, but rather leverages Girsanov theorem to directly calculate the gradient of the SOC objective on-policy. This allows us to speed up the optimization of control policies parameterized by neural networks since it completely avoids the expensive back-propagation step through stochastic differential equations (SDEs) used in the Neural SDE framework. In particular, it enables us to solve SOC problems in high dimension and on long time horizons. We demonstrate the efficiency of our approach in various domains of applications, including standard stochastic optimal control problems, sampling from unnormalized distributions via construction of a Schr\"odinger-F\"ollmer process, and fine-tuning of pre-trained diffusion models. In all cases our method is shown to outperform the existing methods in both the computing time and memory efficiency.<|reference_end|> | arxiv | @article{hua2024a,
title={A Simulation-Free Deep Learning Approach to Stochastic Optimal Control},
  author={Mengjian Hua, Matthieu Lauri\`ere, Eric Vanden-Eijnden},
journal={arXiv preprint arXiv:2410.05163},
year={2024},
archivePrefix={arXiv},
eprint={2410.05163},
primaryClass={cs.LG math.OC}
} | hua2024a |
arxiv-666589 | 2410.05164 | Union Bound Analysis for Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM) With Channel Quantization | <|reference_start|>Union Bound Analysis for Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM) With Channel Quantization: As an emerging non-volatile memory (NVM) technology, spin-torque transfer magnetic random access memory (STT-MRAM) has received great attention in recent years since it combines the features of low switching energy, fast write/read speed, and high scalability. However, process variation and thermal fluctuation severely affect the data integrity of STT-MRAM, resulting in both write errors and read errors. Therefore, effective error correction codes (ECCs) are necessary for correcting memory cell errors. Meanwhile, the design of channel quantizer plays a critical role in supporting error correction coding for STT-MRAM. In this work, we propose a union bound analysis which can accurately predict the word error rates (WERs) of ECCs with maximum-likelihood (ML) decoding over the quantized STT-MRAM channel. The derived bound provides a theoretical tool for comparing the performance of ECCs with different quantization schemes at very low error rate levels without resorting to lengthy computer simulations. Moreover, we also propose a new criterion to design the channel quantizer by minimizing the WERs of ECC decoding that are obtained from the union bound analysis. Numerical results show that the proposed union-bound-optimized (UBO) quantizer can achieve better error rate performance than the state-of-art quantizers for STT-MRAM.<|reference_end|> | arxiv | @article{zhong2024union,
title={Union Bound Analysis for Spin-Torque Transfer Magnetic Random Access
Memory (STT-MRAM) With Channel Quantization},
author={Xingwei Zhong, Kui Cai, and Guanghui Song},
journal={arXiv preprint arXiv:2410.05164},
year={2024},
archivePrefix={arXiv},
eprint={2410.05164},
primaryClass={cs.IT math.IT}
} | zhong2024union |
arxiv-666590 | 2410.05165 | Efficient Inference for Large Language Model-based Generative Recommendation | <|reference_start|>Efficient Inference for Large Language Model-based Generative Recommendation: Large Language Model (LLM)-based generative recommendation has achieved notable success, yet its practical deployment is costly particularly due to excessive inference latency caused by autoregressive decoding. For lossless LLM decoding acceleration, Speculative Decoding (SD) has emerged as a promising solution. However, applying SD to generative recommendation presents unique challenges due to the requirement of generating top-K items (i.e., K distinct token sequences) as a recommendation list by beam search. This leads to more stringent verification in SD, where all the top-K sequences from the target LLM must be successfully drafted by the draft model at each decoding step. To alleviate this, we consider 1) boosting top-K sequence alignment between the draft model and the target LLM, and 2) relaxing the verification strategy to reduce trivial LLM calls. To this end, we propose an alignment framework named AtSpeed, which presents the AtSpeed-S optimization objective for top-K alignment under the strict top-K verification. Moreover, we introduce a relaxed sampling verification strategy that allows high-probability non-top-K drafted sequences to be accepted, significantly reducing LLM calls. Correspondingly, we propose AtSpeed-R for top-K alignment under this relaxed sampling verification. Empirical results on two real-world datasets demonstrate that AtSpeed significantly accelerates LLM-based generative recommendation, e.g., near 2x speedup under strict top-K verification and up to 2.5 speedup under relaxed sampling verification. The codes and datasets will be released in the near future.<|reference_end|> | arxiv | @article{lin2024efficient,
title={Efficient Inference for Large Language Model-based Generative
Recommendation},
author={Xinyu Lin, Chaoqun Yang, Wenjie Wang, Yongqi Li, Cunxiao Du, Fuli
Feng, See-Kiong Ng, Tat-Seng Chua},
journal={arXiv preprint arXiv:2410.05165},
year={2024},
archivePrefix={arXiv},
eprint={2410.05165},
primaryClass={cs.IR cs.CL}
} | lin2024efficient |
arxiv-666591 | 2410.05167 | Presto! Distilling Steps and Layers for Accelerating Music Generation | <|reference_start|>Presto! Distilling Steps and Layers for Accelerating Music Generation: Despite advances in diffusion-based text-to-music (TTM) methods, efficient, high-quality generation remains a challenge. We introduce Presto!, an approach to inference acceleration for score-based diffusion transformers via reducing both sampling steps and cost per step. To reduce steps, we develop a new score-based distribution matching distillation (DMD) method for the EDM-family of diffusion models, the first GAN-based distillation method for TTM. To reduce the cost per step, we develop a simple, but powerful improvement to a recent layer distillation method that improves learning via better preserving hidden state variance. Finally, we combine our step and layer distillation methods together for a dual-faceted approach. We evaluate our step and layer distillation methods independently and show each yield best-in-class performance. Our combined distillation method can generate high-quality outputs with improved diversity, accelerating our base model by 10-18x (230/435ms latency for 32 second mono/stereo 44.1kHz, 15x faster than comparable SOTA) -- the fastest high-quality TTM to our knowledge. Sound examples can be found at https://presto-music.github.io/web/.<|reference_end|> | arxiv | @article{novack2024presto!,
title={Presto! Distilling Steps and Layers for Accelerating Music Generation},
author={Zachary Novack, Ge Zhu, Jonah Casebeer, Julian McAuley, Taylor
Berg-Kirkpatrick, Nicholas J. Bryan},
journal={arXiv preprint arXiv:2410.05167},
year={2024},
archivePrefix={arXiv},
eprint={2410.05167},
primaryClass={cs.SD cs.AI cs.LG eess.AS}
} | novack2024presto! |
arxiv-666592 | 2410.05168 | ReasoningRank: Teaching Student Models to Rank through Reasoning-Based Knowledge Distillation | <|reference_start|>ReasoningRank: Teaching Student Models to Rank through Reasoning-Based Knowledge Distillation: Reranking documents based on their relevance to a given query is critical in information retrieval. Traditional reranking methods often focus on improving the initial rankings but lack transparency, failing to explain why one document is ranked higher. In this paper, we introduce ReasoningRank, a novel reranking approach that enhances clarity by generating two types of reasoning: explicit reasoning, which explains how a document addresses the query, and comparison reasoning, which justifies the relevance of one document over another. We leverage large language models (LLMs) as teacher models to generate these explanations and distill this knowledge into smaller, more resource-efficient student models. While the student models may not outperform LLMs in speed, they significantly reduce the computational burden by requiring fewer resources, making them more suitable for large-scale or resource-constrained settings. These student models are trained to both generate meaningful reasoning and rerank documents, achieving competitive performance across multiple datasets, including MSMARCO and BRIGHT. Experiments demonstrate that ReasoningRank improves reranking accuracy and provides valuable insights into the decision-making process, offering a structured and interpretable solution for reranking tasks.<|reference_end|> | arxiv | @article{ji2024reasoningrank:,
title={ReasoningRank: Teaching Student Models to Rank through Reasoning-Based
Knowledge Distillation},
author={Yuelyu Ji, Zhuochun Li, Rui Meng, and Daqing He},
journal={arXiv preprint arXiv:2410.05168},
year={2024},
archivePrefix={arXiv},
eprint={2410.05168},
primaryClass={cs.CL}
} | ji2024reasoningrank: |
arxiv-666593 | 2410.05172 | Unlocking Potential: Integrating Multihop, CRC, and GRAND for Wireless 5G-Beyond/6G Networks | <|reference_start|>Unlocking Potential: Integrating Multihop, CRC, and GRAND for Wireless 5G-Beyond/6G Networks: As future wireless networks move towards millimeter wave (mmWave) and terahertz (THz) frequencies for 6G, multihop transmission using Integrated Access Backhaul (IABs) and Network-Controlled Repeaters (NCRs) will be highly essential to overcome coverage limitations. This paper examines the use of Guessing Random Additive Noise (GRAND) decoding for multihop transmissions in 3GPP networks. We explore two scenarios: one where only the destination uses GRAND decoding, and another where both relays and the destination leverage it. Interestingly, in the latter scenario, the Bit Error Rate (BER) curves for all hop counts intersect at a specific Signal-to-Noise Ratio (SNR), which we term the GRAND barrier. This finding offers valuable insights for future research and 3GPP standard development. Simulations confirm the effectiveness of GRAND in improving communication speed and quality, contributing to the robustness and interconnectivity of future wireless systems, particularly relevant for the migration towards mmWave and THz bands in 6G networks. Finally, we investigate the integration of multihop transmission, CRC detection, and GRAND decoding within 3GPP networks, demonstrating their potential to overcome coverage limitations and enhance overall network performance.<|reference_end|> | arxiv | @article{bozkurt2024unlocking,
title={Unlocking Potential: Integrating Multihop, CRC, and GRAND for Wireless
5G-Beyond/6G Networks},
author={Bora Bozkurt, Emirhan Zor and Ferkan Yilmaz},
  journal={Publication ID: 979-8-3503-8481-9/24/\$31.00 \copyright 2024 IEEE},
year={2024},
archivePrefix={arXiv},
eprint={2410.05172},
primaryClass={cs.IT math.IT}
} | bozkurt2024unlocking |
arxiv-666594 | 2410.05173 | Provably Positivity-Preserving Constrained Transport (PPCT) Second-Order Scheme for Ideal Magnetohydrodynamics | <|reference_start|>Provably Positivity-Preserving Constrained Transport (PPCT) Second-Order Scheme for Ideal Magnetohydrodynamics: This paper proposes and analyzes a robust and efficient second-order positivity-preserving constrained transport (PPCT) scheme for ideal magnetohydrodynamics (MHD) on non-staggered Cartesian meshes. The PPCT scheme ensures two critical physical constraints: a globally discrete divergence-free (DDF) condition on the magnetic field and the positivity of density and pressure. The method is inspired by a novel splitting technique from [T.A. Dao, M. Nazarov and I. Tomas, J. Comput. Phys., 508:113009, 2024], which divides the MHD system into an Euler subsystem with steady magnetic fields and a magnetic subsystem with steady density and internal energy. To achieve these structure-preserving properties, the PPCT scheme combines a positivity-preserving (PP) finite volume method for the Euler subsystem with a finite difference constrained transport (CT) method for the magnetic subsystem via Strang splitting. The finite volume method employs a new PP limiter that retains second-order accuracy and enforces the positivity of density and pressure, with rigorous proof provided using the geometric quasilinearization (GQL) approach [K. Wu and C.-W. Shu, SIAM Review, 65:1031-1073, 2023]. For the magnetic subsystem, we develop an implicit finite difference CT method that conserves energy and maintains a globally DDF constraint. This nonlinear system is efficiently solved to machine precision using an iterative algorithm. Since the CT method is unconditionally energy-stable and conserves steady density and internal energy, the PPCT scheme requires only a mild CFL condition for the finite volume method to ensure stability and the PP property. While the focus is on 2D cases for clarity, the extension to 3D is discussed. 
Several challenging numerical experiments, including highly magnetized MHD jets with high Mach numbers, validate the PPCT scheme's accuracy, robustness, and high resolution.<|reference_end|> | arxiv | @article{pang2024provably,
title={Provably Positivity-Preserving Constrained Transport (PPCT) Second-Order
Scheme for Ideal Magnetohydrodynamics},
author={Dongwen Pang, Kailiang Wu},
journal={arXiv preprint arXiv:2410.05173},
year={2024},
archivePrefix={arXiv},
eprint={2410.05173},
primaryClass={math.NA cs.NA physics.comp-ph physics.flu-dyn physics.plasm-ph physics.space-ph}
} | pang2024provably |
arxiv-666595 | 2410.05174 | Deep-Learning-Based Adaptive Error-Correction Decoding for Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM) | <|reference_start|>Deep-Learning-Based Adaptive Error-Correction Decoding for Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM): Spin-torque transfer magnetic random access memory (STT-MRAM) is a promising emerging non-volatile memory (NVM) technology with wide applications. However, the data recovery of STT-MRAM is affected by the diversity of channel raw bit error rate (BER) across different dies caused by process variations, as well as the unknown resistance offset due to temperature change. Therefore, it is critical to develop effective decoding algorithms of error correction codes (ECCs) for STT-MRAM. In this article, we first propose a neural bit-flipping (BF) decoding algorithm, which can share the same trellis representation as the state-of-the-art neural decoding algorithms, such as the neural belief propagation (NBP) and neural offset min-sum (NOMS) algorithm. Hence, a neural network (NN) decoder with a uniform architecture but different NN parameters can realize all these neural decoding algorithms. Based on such a unified NN decoder architecture, we further propose a novel deep-learning (DL)-based adaptive decoding algorithm whose decoding complexity can be adjusted according to the change of the channel conditions of STT-MRAM. Extensive experimental evaluation results demonstrate that the proposed neural decoders can greatly improve the performance over the standard decoders, with similar decoding latency and energy consumption. Moreover, the DL-based adaptive decoder can work well over different channel conditions of STT-MRAM irrespective of the unknown resistance offset, with a 50% reduction of the decoding latency and energy consumption compared to the fixed decoder.<|reference_end|> | arxiv | @article{zhong2024deep-learning-based,
title={Deep-Learning-Based Adaptive Error-Correction Decoding for Spin-Torque
Transfer Magnetic Random Access Memory (STT-MRAM)},
author={Xingwei Zhong, Kui Cai, Peng Kang, Guanghui Song, and Bin Dai},
journal={arXiv preprint arXiv:2410.05174},
year={2024},
archivePrefix={arXiv},
eprint={2410.05174},
primaryClass={cs.IT eess.SP math.IT}
} | zhong2024deep-learning-based |
arxiv-666596 | 2410.05175 | Avoiding Deadlocks via Weak Deadlock Sets | <|reference_start|>Avoiding Deadlocks via Weak Deadlock Sets: A deadlock occurs in a network when two or more items prevent each other from moving and are stalled. In a general model, items are stored at vertices and each vertex $v$ has a buffer with $b(v)$ slots. Given a route for each item toward its destination, the Deadlock Safety Problem asks whether the current state is safe, i.e., it is possible to deliver each item at its destination, or is bound to deadlock, i.e., any sequence of moves will end up with a set of items stalled. While when $b \geq 2$ the problem is solvable in polynomial time building upon a nice characterization of YES/NO-instances, it is NP-hard on quite simple graphs as grids when $b=1$ and on trees when $b\leq 3$. We improve on these results by means of two new tools, weak deadlock sets and wise states. We show that for general networks and $b$ a state that is wise and without weak deadlock sets -- this can be recognized in polynomial time -- is safe: this is indeed a strengthening of the result for $b\geq 2$. We sharpen this result for trees, where we show that a wise state is safe if and only if it has no weak deadlock set. That is interesting in particular in the context of rail transportation where networks are often single-tracked and deadlock detection and avoidance focuses on local sub-networks, mostly with a tree-like structure. We pose some research questions for future investigations.<|reference_end|> | arxiv | @article{oriolo2024avoiding,
title={Avoiding Deadlocks via Weak Deadlock Sets},
author={Gianpaolo Oriolo, Anna Russo Russo},
journal={arXiv preprint arXiv:2410.05175},
year={2024},
archivePrefix={arXiv},
eprint={2410.05175},
primaryClass={math.OC cs.CC}
} | oriolo2024avoiding |
arxiv-666597 | 2410.05177 | Are causal effect estimations enough for optimal recommendations under multitreatment scenarios? | <|reference_start|>Are causal effect estimations enough for optimal recommendations under multitreatment scenarios?: When making treatment selection decisions, it is essential to include a causal effect estimation analysis to compare potential outcomes under different treatments or controls, assisting in optimal selection. However, merely estimating individual treatment effects may not suffice for truly optimal decisions. Our study addressed this issue by incorporating additional criteria, such as the estimations' uncertainty, measured by the conditional value-at-risk, commonly used in portfolio and insurance management. For continuous outcomes observable before and after treatment, we incorporated a specific prediction condition. We prioritized treatments that could yield optimal treatment effect results and lead to post-treatment outcomes more desirable than pretreatment levels, with the latter condition being called the prediction criterion. With these considerations, we propose a comprehensive methodology for multitreatment selection. Our approach ensures satisfaction of the overlap assumption, crucial for comparing outcomes for treated and control groups, by training propensity score models as a preliminary step before employing traditional causal models. To illustrate a practical application of our methodology, we applied it to the credit card limit adjustment problem. Analyzing a fintech company's historical data, we found that relying solely on counterfactual predictions was inadequate for appropriate credit line modifications. Incorporating our proposed additional criteria significantly enhanced policy performance.<|reference_end|> | arxiv | @article{alfonso-sánchez2024are,
title={Are causal effect estimations enough for optimal recommendations under
multitreatment scenarios?},
  author={Sherly Alfonso-S\'anchez, Kristina P. Sendova and Cristi\'an Bravo},
journal={arXiv preprint arXiv:2410.05177},
year={2024},
archivePrefix={arXiv},
eprint={2410.05177},
primaryClass={stat.ML cs.LG}
} | alfonso-sánchez2024are |
arxiv-666598 | 2410.05180 | Enhancing Equity in Large Language Models for Medical Applications | <|reference_start|>Enhancing Equity in Large Language Models for Medical Applications: Recent advancements have highlighted the potential of large language models (LLMs) in medical applications, notably in automating Clinical Trial Matching for translational research and providing medical question-answering for clinical decision support. However, our study reveals significant inequities in the use of LLMs, particularly for individuals from specific racial, gender, and underrepresented groups influenced by social determinants of health. These disparities could worsen existing health inequities if LLMs are broadly adopted in healthcare. To address this, we propose and evaluate a novel framework, EquityGuard, designed to detect and mitigate biases in LLM-based medical applications. EquityGuard incorporates a Bias Detection Mechanism capable of identifying and correcting unfair predictions, thus enhancing outcomes and promoting equity across diverse population groups.<|reference_end|> | arxiv | @article{ji2024mitigating,
title={Mitigating the Risk of Health Inequity Exacerbated by Large Language
Models},
author={Yuelyu Ji, Wenhe Ma, Sonish Sivarajkumar, Hang Zhang, Eugene Mathew
Sadhu, Zhuochun Li, Xizhi Wu, Shyam Visweswaran, and Yanshan Wang},
journal={arXiv preprint arXiv:2410.05180},
year={2024},
archivePrefix={arXiv},
eprint={2410.05180},
primaryClass={cs.CL}
} | ji2024mitigating |
arxiv-666599 | 2410.05182 | MARs: Multi-view Attention Regularizations for Patch-based Feature Recognition of Space Terrain | <|reference_start|>MARs: Multi-view Attention Regularizations for Patch-based Feature Recognition of Space Terrain: The visual detection and tracking of surface terrain is required for spacecraft to safely land on or navigate within close proximity to celestial objects. Current approaches rely on template matching with pre-gathered patch-based features, which are expensive to obtain and a limiting factor in perceptual capability. While recent literature has focused on in-situ detection methods to enhance navigation and operational autonomy, robust description is still needed. In this work, we explore metric learning as the lightweight feature description mechanism and find that current solutions fail to address inter-class similarity and multi-view observational geometry. We attribute this to the view-unaware attention mechanism and introduce Multi-view Attention Regularizations (MARs) to constrain the channel and spatial attention across multiple feature views, regularizing the what and where of attention focus. We thoroughly analyze many modern metric learning losses with and without MARs and demonstrate improved terrain-feature recognition performance by upwards of 85%. We additionally introduce the Luna-1 dataset, consisting of Moon crater landmarks and reference navigation frames from NASA mission data to support future research in this difficult task. Luna-1 and source code are publicly available at https://droneslab.github.io/mars/.<|reference_end|> | arxiv | @article{chase2024mars:,
title={MARs: Multi-view Attention Regularizations for Patch-based Feature
Recognition of Space Terrain},
author={Timothy Chase Jr, Karthik Dantu},
journal={arXiv preprint arXiv:2410.05182},
year={2024},
archivePrefix={arXiv},
eprint={2410.05182},
primaryClass={cs.CV cs.AI cs.LG cs.RO}
} | chase2024mars: |
arxiv-666600 | 2410.05183 | Beyond Correlation: Interpretable Evaluation of Machine Translation Metrics | <|reference_start|>Beyond Correlation: Interpretable Evaluation of Machine Translation Metrics: Machine Translation (MT) evaluation metrics assess translation quality automatically. Recently, researchers have employed MT metrics for various new use cases, such as data filtering and translation re-ranking. However, most MT metrics return assessments as scalar scores that are difficult to interpret, posing a challenge to making informed design choices. Moreover, MT metrics' capabilities have historically been evaluated using correlation with human judgment, which, despite its efficacy, falls short of providing intuitive insights into metric performance, especially in terms of new metric use cases. To address these issues, we introduce an interpretable evaluation framework for MT metrics. Within this framework, we evaluate metrics in two scenarios that serve as proxies for the data filtering and translation re-ranking use cases. Furthermore, by measuring the performance of MT metrics using Precision, Recall, and F-score, we offer clearer insights into their capabilities than correlation with human judgments. Finally, we raise concerns regarding the reliability of manually curated data following the Direct Assessments+Scalar Quality Metrics (DA+SQM) guidelines, reporting a notably low agreement with Multidimensional Quality Metrics (MQM) annotations.<|reference_end|> | arxiv | @article{perrella2024beyond,
title={Beyond Correlation: Interpretable Evaluation of Machine Translation
Metrics},
  author={Stefano Perrella, Lorenzo Proietti, Pere-Llu\'is Huguet Cabot, Edoardo
Barba, Roberto Navigli},
journal={arXiv preprint arXiv:2410.05183},
year={2024},
archivePrefix={arXiv},
eprint={2410.05183},
primaryClass={cs.CL cs.AI}
} | perrella2024beyond |