corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-667801 | 2410.07414 | Bayes-Nash Generative Privacy Protection Against Membership Inference Attacks | <|reference_start|>Bayes-Nash Generative Privacy Protection Against Membership Inference Attacks: An ability to share data, even in aggregated form, is critical to advancing both conventional and data science. However, insofar as such datasets are comprised of individuals, their membership in these datasets is often viewed as sensitive, with membership inference attacks (MIAs) threatening to violate their privacy. We propose a Bayesian game model for privacy-preserving publishing of data-sharing mechanism outputs (for example, summary statistics for sharing genomic data). In this game, the defender minimizes a combination of expected utility and privacy loss, with the latter being maximized by a Bayes-rational attacker. We propose a GAN-style algorithm to approximate a Bayes-Nash equilibrium of this game, and introduce the notions of Bayes-Nash generative privacy (BNGP) and Bayes generative privacy (BGP) risk that aims to optimally balance the defender's privacy and utility in a way that is robust to the attacker's heterogeneous preferences with respect to true and false positives. We demonstrate the properties of composition and post-processing for BGP risk and establish conditions under which BNGP and pure differential privacy (PDP) are equivalent. We apply our method to sharing summary statistics, where MIAs can re-identify individuals even from aggregated data. Theoretical analysis and empirical results demonstrate that our Bayesian game-theoretic method outperforms state-of-the-art approaches for privacy-preserving sharing of summary statistics.<|reference_end|> | arxiv | @article{zhang2024bayes-nash,
title={Bayes-Nash Generative Privacy Protection Against Membership Inference
Attacks},
author={Tao Zhang and Rajagopal Venkatesaraman and Rajat K. De and
Bradley A. Malin and Yevgeniy Vorobeychik},
journal={arXiv preprint arXiv:2410.07414},
year={2024},
archivePrefix={arXiv},
eprint={2410.07414},
primaryClass={cs.CR}
} | zhang2024bayes-nash |
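As an illustration of the threat model this entry defends against (not the paper's BNGP mechanism), a minimal membership inference attack on a released sample mean can be sketched as follows; the inner-product score, dataset size, and threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mia_trial(n=4, threshold=0.125):
    """One membership-inference trial against a released summary statistic.

    A dataset of n records is drawn from N(0, 1) and only its mean is
    published. The attacker scores a candidate record x by x * mean and
    guesses "member" when the score exceeds the threshold.
    """
    data = rng.normal(size=n)
    released = data.mean()
    member, non_member = data[0], rng.normal()
    guess_member = member * released > threshold
    guess_non_member = non_member * released > threshold
    return guess_member, not guess_non_member  # correctness of each guess

# Attack accuracy over many trials sits clearly above chance: this is the
# leakage that privacy-preserving release mechanisms aim to suppress.
results = [mia_trial() for _ in range(4000)]
accuracy = sum(int(a) + int(b) for a, b in results) / (2 * len(results))
```

Even this naive attacker beats random guessing on aggregated data, which is why the abstract stresses that MIAs can re-identify individuals from summary statistics.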
arxiv-667802 | 2410.07415 | 3D2M Dataset: A 3-Dimension diverse Mesh Dataset | <|reference_start|>3D2M Dataset: A 3-Dimension diverse Mesh Dataset: Three-dimensional (3D) reconstruction has emerged as a prominent area of research, attracting significant attention from academia and industry alike. Among the various applications of 3D reconstruction, facial reconstruction poses some of the most formidable challenges. Additionally, each individual's facial structure is unique, requiring algorithms to be robust enough to handle this variability while maintaining fidelity to the original features. This article presents a comprehensive dataset of 3D meshes featuring a diverse range of facial structures and corresponding facial landmarks. The dataset comprises 188 3D facial meshes, including 73 from female candidates and 114 from male candidates. It encompasses a broad representation of ethnic backgrounds, with contributions from 45 different ethnicities, ensuring a rich diversity in facial characteristics. Each facial mesh is accompanied by key points that accurately annotate the relevant features, facilitating precise analysis and manipulation. This dataset is particularly valuable for applications such as facial retargeting, the study of facial structure components, and real-time person representation in video streams. By providing a robust resource for researchers and developers, it aims to advance the field of 3D facial reconstruction and related technologies.<|reference_end|> | arxiv | @article{dasgupta20243d2m,
title={3D2M Dataset: A 3-Dimension diverse Mesh Dataset},
author={Sankarshan Dasgupta},
journal={arXiv preprint arXiv:2410.07415},
year={2024},
archivePrefix={arXiv},
eprint={2410.07415},
primaryClass={cs.CV cs.MM}
} | dasgupta20243d2m |
arxiv-667803 | 2410.07418 | NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest | <|reference_start|>NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest: Forest mapping provides critical observational data needed to understand the dynamics of forest environments. Notably, tree diameter at breast height (DBH) is a metric used to estimate forest biomass and carbon dioxide (CO$_2$) sequestration. Manual methods of forest mapping are labor intensive and time consuming, a bottleneck for large-scale mapping efforts. Automated mapping relies on acquiring dense forest reconstructions, typically in the form of point clouds. Terrestrial laser scanning (TLS) and mobile laser scanning (MLS) generate point clouds using expensive LiDAR sensing, and have been used successfully to estimate tree diameter. Neural radiance fields (NeRFs) are an emergent technology enabling photorealistic, vision-based reconstruction by training a neural network on a sparse set of input views. In this paper, we present a comparison of MLS and NeRF forest reconstructions for the purpose of trunk diameter estimation in a mixed-evergreen Redwood forest. In addition, we propose an improved DBH-estimation method using convex-hull modeling. Using this approach, we achieved 1.68 cm RMSE, which consistently outperformed standard cylinder modeling approaches. Our code contributions and forest datasets are freely available at https://github.com/harelab-ucsc/RedwoodNeRF.<|reference_end|> | arxiv | @article{korycki2024nerf-accelerated,
title={NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest},
author={Adam Korycki and Cory Yeaton and Gregory S. Gilbert and
Colleen Josephson and Steve McGuire},
journal={arXiv preprint arXiv:2410.07418},
year={2024},
archivePrefix={arXiv},
eprint={2410.07418},
primaryClass={cs.CV cs.RO}
} | korycki2024nerf-accelerated |
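The convex-hull DBH idea in the entry above can be illustrated with a self-contained sketch (an assumption-laden reimplementation, not the authors' released code): slice the trunk point cloud at breast height, take the 2D convex hull of the slice, and convert the hull perimeter P to a diameter via d = P / pi:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def dbh_estimate(slice_points):
    """Diameter from the hull perimeter of a breast-height slice: d = P / pi."""
    hull = convex_hull(slice_points)
    perimeter = sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
                    for i in range(len(hull)))
    return perimeter / math.pi

# synthetic cross-section of a 30 cm trunk (radius 0.15 m)
ring = [(0.15 * math.cos(2*math.pi*i/360), 0.15 * math.sin(2*math.pi*i/360))
        for i in range(360)]
estimate = dbh_estimate(ring)
```

Using the hull perimeter rather than a fitted cylinder makes the estimate robust to occluded arcs of the trunk, which is one plausible reading of why hull modeling outperformed cylinder fitting in the abstract.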
arxiv-667804 | 2410.07419 | Reconfigurations of Plane Caterpillars and Paths | <|reference_start|>Reconfigurations of Plane Caterpillars and Paths: Let $S$ be a point set in the plane, $\mathcal{P}(S)$ and $\mathcal{C}(S)$ sets of all plane spanning paths and caterpillars on $S$. We study reconfiguration operations on $\mathcal{P}(S)$ and $\mathcal{C}(S)$. In particular, we prove that all of the commonly studied reconfigurations on plane spanning trees still yield connected reconfiguration graphs for caterpillars when $S$ is in convex position. If $S$ is in general position, we show that the rotation, compatible flip and flip graphs of $\mathcal{C}(S)$ are connected while the slide graph is disconnected. For paths, we prove the existence of a connected component of size at least $2^{n-1}$ and that no component of size at most $7$ can exist in the flip graph on $\mathcal{P}(S)$.<|reference_end|> | arxiv | @article{antić2024reconfigurations,
title={Reconfigurations of Plane Caterpillars and Paths},
author={Todor Anti\'c and Guillermo Gamboa Quintero and Jelena Gli\v{s}i\'c},
journal={arXiv preprint arXiv:2410.07419},
year={2024},
archivePrefix={arXiv},
eprint={2410.07419},
primaryClass={math.CO cs.CG}
} | antić2024reconfigurations |
arxiv-667805 | 2410.07421 | Segmenting objects with Bayesian fusion of active contour models and convnet priors | <|reference_start|>Segmenting objects with Bayesian fusion of active contour models and convnet priors: Instance segmentation is a core computer vision task with great practical significance. Recent advances, driven by large-scale benchmark datasets, have yielded good general-purpose Convolutional Neural Network (CNN)-based methods. Natural Resource Monitoring (NRM) utilizes remote sensing imagery with generally known scale and containing multiple overlapping instances of the same class, wherein the object contours are jagged and highly irregular. This is in stark contrast with the regular man-made objects found in classic benchmark datasets. We address this problem and propose a novel instance segmentation method geared towards NRM imagery. We formulate the problem as Bayesian maximum a posteriori inference which, in learning the individual object contours, incorporates shape, location, and position priors from state-of-the-art CNN architectures, driving a simultaneous level-set evolution of multiple object contours. We employ loose coupling between the CNNs that supply the priors and the active contour process, allowing a drop-in replacement of new network architectures. Moreover, we introduce a novel prior for contour shape, namely, a class of Deep Shape Models based on architectures from Generative Adversarial Networks (GANs). These Deep Shape Models are in essence a non-linear generalization of the classic Eigenshape formulation. In experiments, we tackle the challenging, real-world problem of segmenting individual dead tree crowns and delineating precise contours. We compare our method to two leading general-purpose instance segmentation methods - Mask R-CNN and K-net - on color infrared aerial imagery. Results show our approach to significantly outperform both methods in terms of reconstruction quality of tree crown contours. 
Furthermore, use of the GAN-based deep shape model prior yields significant improvement of all results over the vanilla Eigenshape prior.<|reference_end|> | arxiv | @article{polewski2024segmenting,
title={Segmenting objects with Bayesian fusion of active contour models and
convnet priors},
author={Przemyslaw Polewski and Jacquelyn Shelton and Wei Yao and Marco Heurich},
journal={arXiv preprint arXiv:2410.07421},
year={2024},
archivePrefix={arXiv},
eprint={2410.07421},
primaryClass={cs.CV}
} | polewski2024segmenting |
arxiv-667806 | 2410.07422 | Understanding User Needs for Injury Recovery with Augmented Reality | <|reference_start|>Understanding User Needs for Injury Recovery with Augmented Reality: Physical therapy (PT) plays a crucial role in muscle injury recovery, but people struggle to adhere to and perform PT exercises correctly from home. To support challenges faced with in-home PT, augmented reality (AR) holds promise in enhancing patient's engagement and accuracy through immersive interactive visualizations. However, effectively leveraging AR requires a better understanding of patient needs during injury recovery. Through interviews with six individuals undergoing physical therapy, this paper introduces user-centered design considerations integrating AR and body motion data to enhance in-home PT for injury recovery. Our findings identify key challenges and propose design variables for future body-based visualizations of body motion data for PT.<|reference_end|> | arxiv | @article{kandel2024understanding,
title={Understanding User Needs for Injury Recovery with Augmented Reality},
author={Jade Kandel and Sriya Kasumarthi and Danielle Albers Szafir},
journal={arXiv preprint arXiv:2410.07422},
year={2024},
archivePrefix={arXiv},
eprint={2410.07422},
primaryClass={cs.HC}
} | kandel2024understanding |
arxiv-667807 | 2410.07426 | CAFEEN: A Cooperative Approach for Energy Efficient NoCs with Multi-Agent Reinforcement Learning | <|reference_start|>CAFEEN: A Cooperative Approach for Energy Efficient NoCs with Multi-Agent Reinforcement Learning: In emerging high-performance Network-on-Chip (NoC) architectures, efficient power management is crucial to minimize energy consumption. We propose a novel framework called CAFEEN that employs both heuristic-based fine-grained and machine learning-based coarse-grained power-gating for energy-efficient NoCs. CAFEEN uses a fine-grained method to activate only essential NoC buffers during lower network loads. It switches to a coarse-grained method at peak loads to minimize compounding wake-up overhead using multi-agent reinforcement learning. Results show that CAFEEN adaptively balances power-efficiency with performance, reducing total energy by 2.60x for single application workloads and 4.37x for multi-application workloads, compared to state-of-the-art NoC power-gating frameworks.<|reference_end|> | arxiv | @article{khan2024cafeen:,
title={CAFEEN: A Cooperative Approach for Energy Efficient NoCs with
Multi-Agent Reinforcement Learning},
author={Kamil Khan and Sudeep Pasricha},
journal={arXiv preprint arXiv:2410.07426},
year={2024},
archivePrefix={arXiv},
eprint={2410.07426},
primaryClass={cs.LG cs.AI cs.AR}
} | khan2024cafeen: |
arxiv-667808 | 2410.07427 | A Generalization Bound for a Family of Implicit Networks | <|reference_start|>A Generalization Bound for a Family of Implicit Networks: Implicit networks are a class of neural networks whose outputs are defined by the fixed point of a parameterized operator. They have enjoyed success in natural language processing, image processing, and numerous other applications. While they have found abundant empirical success, theoretical work on their generalization is still under-explored. In this work, we consider a large family of implicit networks defined by parameterized contractive fixed-point operators. We show a generalization bound for this class based on a covering number argument for the Rademacher complexity of these architectures.<|reference_end|> | arxiv | @article{fung2024a,
title={A Generalization Bound for a Family of Implicit Networks},
author={Samy Wu Fung and Benjamin Berkels},
journal={arXiv preprint arXiv:2410.07427},
year={2024},
archivePrefix={arXiv},
eprint={2410.07427},
primaryClass={cs.LG stat.ML}
} | fung2024a |
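The forward pass of such an implicit network, an output defined as the fixed point of a contractive parameterized operator, can be sketched as below; the tanh layer and the spectral-norm rescaling are illustrative choices standing in for the paper's abstract operator family:

```python
import numpy as np

rng = np.random.default_rng(0)

def implicit_forward(W, U, b, x, z0, tol=1e-10, max_iter=1000):
    """Iterate z <- tanh(W z + U x + b) to its fixed point.

    If the map is a contraction (here enforced by ||W||_2 < 1, since
    |tanh'| <= 1), Banach's theorem guarantees a unique fixed point
    regardless of the initialization z0.
    """
    z = z0
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

d, p = 8, 3
W = rng.normal(size=(d, d))
W *= 0.5 / np.linalg.norm(W, 2)          # rescale to spectral norm 0.5
U, b, x = rng.normal(size=(d, p)), rng.normal(size=d), rng.normal(size=p)

z_a = implicit_forward(W, U, b, x, np.zeros(d))  # two different inits ...
z_b = implicit_forward(W, U, b, x, np.ones(d))   # ... same fixed point
```

The init-independence of the output is exactly what makes the "family of contractive fixed-point operators" in the abstract a well-defined function class to bound.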
arxiv-667809 | 2410.07428 | The First VoicePrivacy Attacker Challenge Evaluation Plan | <|reference_start|>The First VoicePrivacy Attacker Challenge Evaluation Plan: The First VoicePrivacy Attacker Challenge is a new kind of challenge organized as part of the VoicePrivacy initiative and supported by ICASSP 2025 as the SP Grand Challenge It focuses on developing attacker systems against voice anonymization, which will be evaluated against a set of anonymization systems submitted to the VoicePrivacy 2024 Challenge. Training, development, and evaluation datasets are provided along with a baseline attacker system. Participants shall develop their attacker systems in the form of automatic speaker verification systems and submit their scores on the development and evaluation data to the organizers. To do so, they can use any additional training data and models, provided that they are openly available and declared before the specified deadline. The metric for evaluation is equal error rate (EER). Results will be presented at the ICASSP 2025 special session to which 5 selected top-ranked participants will be invited to submit and present their challenge systems.<|reference_end|> | arxiv | @article{tomashenko2024the,
title={The First VoicePrivacy Attacker Challenge Evaluation Plan},
author={Natalia Tomashenko and Xiaoxiao Miao and Emmanuel Vincent and Junichi Yamagishi},
journal={arXiv preprint arXiv:2410.07428},
year={2024},
archivePrefix={arXiv},
eprint={2410.07428},
primaryClass={eess.AS cs.CL cs.CR}
} | tomashenko2024the |
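The challenge metric, equal error rate, is the operating point where the false-acceptance and false-rejection rates of a verification system coincide. A minimal sketch (a simple threshold sweep; production toolkits interpolate the ROC more carefully):

```python
def equal_error_rate(target_scores, nontarget_scores):
    """EER from verification scores: sweep thresholds over all observed
    scores and return the point where FAR and FRR are closest."""
    best_gap, eer = float("inf"), None
    for t in sorted(set(target_scores) | set(nontarget_scores)):
        far = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        frr = sum(s < t for s in target_scores) / len(target_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

For an attacker in this challenge, lower EER means a stronger attack, so anonymization systems are ranked by how high they can push the attacker's EER.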
arxiv-667810 | 2410.07430 | EventFlow: Forecasting Continuous-Time Event Data with Flow Matching | <|reference_start|>EventFlow: Forecasting Continuous-Time Event Data with Flow Matching: Continuous-time event sequences, in which events occur at irregular intervals, are ubiquitous across a wide range of industrial and scientific domains. The contemporary modeling paradigm is to treat such data as realizations of a temporal point process, and in machine learning it is common to model temporal point processes in an autoregressive fashion using a neural network. While autoregressive models are successful in predicting the time of a single subsequent event, their performance can be unsatisfactory in forecasting longer horizons due to cascading errors. We propose EventFlow, a non-autoregressive generative model for temporal point processes. Our model builds on the flow matching framework in order to directly learn joint distributions over event times, side-stepping the autoregressive process. EventFlow is likelihood-free, easy to implement and sample from, and either matches or surpasses the performance of state-of-the-art models in both unconditional and conditional generation tasks on a set of standard benchmarks<|reference_end|> | arxiv | @article{kerrigan2024eventflow:,
title={EventFlow: Forecasting Continuous-Time Event Data with Flow Matching},
author={Gavin Kerrigan and Kai Nelson and Padhraic Smyth},
journal={arXiv preprint arXiv:2410.07430},
year={2024},
archivePrefix={arXiv},
eprint={2410.07430},
primaryClass={cs.LG stat.ML}
} | kerrigan2024eventflow: |
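The flow-matching objective EventFlow builds on regresses a velocity field onto the path velocity x1 - x0 along linear interpolation paths between noise x0 and data x1. This sketch shows only that loss plumbing; the paper's event-sequence parameterization and model are more involved:

```python
import numpy as np

def flow_matching_loss(model_v, x0, x1, t):
    """Conditional flow matching loss.

    x_t = (1 - t) x0 + t x1 is the linear path; the regression target is
    its constant velocity x1 - x0. Batch shapes: x0, x1 (B, D); t (B,).
    """
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    target = x1 - x0
    pred = model_v(xt, t)
    return float(np.mean((pred - target) ** 2))

zero_model = lambda xt, t: np.zeros_like(xt)   # predicts "no motion"
t = np.array([0.2, 0.5, 0.9, 0.1])
x0 = np.ones((4, 3))

loss_static = flow_matching_loss(zero_model, x0, x0, t)        # nothing to move
loss_shift = flow_matching_loss(zero_model, x0, x0 + 2.0, t)   # true velocity is 2
```

Because the target is a joint transformation of all event times at once, sampling needs no autoregressive rollout, which is the mechanism behind the abstract's claim about avoiding cascading errors.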
arxiv-667811 | 2410.07431 | Goal-oriented vessel detection with distributed computing in a LEO satellite constellation | <|reference_start|>Goal-oriented vessel detection with distributed computing in a LEO satellite constellation: Earth Observation (EO) has traditionally involved the transmission of a large volume of raw data to map the Earth surface. This results in congestion to the satellite network and delays in the availability of the results, invalidating the approach for timing-sensitive applications. Instead, the computation resources at the satellites can be used as an edge layer for compressing the data and/or doing inferences. In this paper, we investigate satellite edge computing for vessel detection with a LEO satellite constellation. First, we distribute the computation and inference load among the neighbouring satellites of the one taking the images, based on the VHRShips data set and YOLOv8. This semantic and fragmented information is then routed to a remote ground monitor through the whole constellation. The average and peak Age of Information (AoI) are reformulated to measure the freshness of the aggregated information at the receiver in this image-capture scenario. We then dimension the network (number of orbital planes and satellites per orbital plane) for a given target age and covered area that quantify the level of achievement of the task. The results show that 20 orbital planes with 20 satellites are necessary to keep the peak AoI below 60 seconds with a compression ratio > 23000, i.e., a size reduction of 99.996%, and for an approximately 100% probability of coverage.<|reference_end|> | arxiv | @article{mercado-martínez2024goal-oriented,
title={Goal-oriented vessel detection with distributed computing in a LEO
satellite constellation},
author={Antonio M. Mercado-Mart\'inez and Beatriz Soret and Antonio Jurado-Navas},
journal={arXiv preprint arXiv:2410.07431},
year={2024},
archivePrefix={arXiv},
eprint={2410.07431},
primaryClass={cs.NI astro-ph.IM}
} | mercado-martínez2024goal-oriented |
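The average and peak Age of Information used here to dimension the constellation can be computed from (generation, delivery) timestamps via the standard sawtooth age evolution; this is a generic sketch, simplified relative to the paper's reformulated metrics:

```python
def aoi_stats(updates):
    """AoI from delivered updates, each a (generation_time, delivery_time) pair.

    Between deliveries the age grows linearly from (d_i - g_i); the peak just
    before delivery i+1 is d_{i+1} - g_i. Returns (time-averaged AoI, peak AoI)
    over the horizon between first and last delivery.
    """
    updates = sorted(updates, key=lambda u: u[1])
    peaks = [d_next - g for (g, _), (_, d_next) in zip(updates, updates[1:])]
    area = 0.0
    for (g, d), (_, d_next) in zip(updates, updates[1:]):
        dt = d_next - d
        area += dt * (d - g) + dt * dt / 2.0   # integrate the sawtooth segment
    horizon = updates[-1][1] - updates[0][1]
    return area / horizon, max(peaks)

# images generated every second, each delivered after a 1 s edge+routing delay
avg_aoi, peak_aoi = aoi_stats([(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)])
```

Dimensioning the constellation then amounts to finding the smallest (planes, satellites-per-plane) pair whose resulting peak AoI stays under the 60 s target.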
arxiv-667812 | 2410.07432 | Can Transformers Reason Logically? A Study in SAT Solving | <|reference_start|>Can Transformers Reason Logically? A Study in SAT Solving: We theoretically and empirically study the logical reasoning capabilities of LLMs in the context of the Boolean satisfiability (SAT) problem. First, we construct a decoder-only Transformer that can solve SAT using backtracking and deduction via Chain-of-Thought (CoT). We prove its correctness by showing trace equivalence to the well-known DPLL SAT-solving algorithm. Second, to support the implementation of this abstract construction, we design a compiler $\texttt{PARAT}$ that takes as input a procedural specification and outputs a transformer model implementing this specification. Third, rather than $\textit{programming}$ a transformer to reason, we evaluate empirically whether it can be $\textit{trained}$ to do so by learning directly from algorithmic traces ("reasoning paths") of the DPLL algorithm.<|reference_end|> | arxiv | @article{pan2024can,
title={Can Transformers Reason Logically? A Study in SAT Solving},
author={Leyan Pan and Vijay Ganesh and Jacob Abernethy and Chris Esposo and Wenke Lee},
journal={arXiv preprint arXiv:2410.07432},
year={2024},
archivePrefix={arXiv},
eprint={2410.07432},
primaryClass={cs.LG cs.AI cs.LO}
} | pan2024can |
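The DPLL procedure that the constructed Transformer is shown trace-equivalent to combines unit propagation with backtracking search over literals. A compact reference sketch (a textbook DPLL, not the paper's Transformer construction or its trace format):

```python
def dpll(cnf):
    """DPLL over CNF given as lists of nonzero ints (negative = negated literal).
    Returns a satisfying {var: bool} assignment, or None if unsatisfiable."""
    def solve(clauses, assign):
        while True:                       # unit propagation
            unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            assign[abs(unit)] = unit > 0
            reduced = []
            for c in clauses:
                if unit in c:
                    continue              # clause satisfied, drop it
                c = c - {-unit}           # falsified literal removed
                if not c:
                    return None           # empty clause: conflict, backtrack
                reduced.append(c)
            clauses = reduced
        if not clauses:
            return assign                 # every clause satisfied
        lit = next(iter(clauses[0]))      # decision: branch on a literal
        for choice in (lit, -lit):
            result = solve([set(c) for c in clauses] + [{choice}], dict(assign))
            if result is not None:
                return result
        return None
    return solve([set(c) for c in cnf], {})
```

The decide / propagate / backtrack events of exactly this loop are what the paper serializes into Chain-of-Thought traces for training.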
arxiv-667813 | 2410.07434 | Surgical Depth Anything: Depth Estimation for Surgical Scenes using Foundation Models | <|reference_start|>Surgical Depth Anything: Depth Estimation for Surgical Scenes using Foundation Models: Monocular depth estimation is crucial for tracking and reconstruction algorithms, particularly in the context of surgical videos. However, the inherent challenges in directly obtaining ground truth depth maps during surgery render supervised learning approaches impractical. While many self-supervised methods based on Structure from Motion (SfM) have shown promising results, they rely heavily on high-quality camera motion and require optimization on a per-patient basis. These limitations can be mitigated by leveraging the current state-of-the-art foundational model for depth estimation, Depth Anything. However, when directly applied to surgical scenes, Depth Anything struggles with issues such as blurring, bleeding, and reflections, resulting in suboptimal performance. This paper presents a fine-tuning of the Depth Anything model specifically for the surgical domain, aiming to deliver more accurate pixel-wise depth maps tailored to the unique requirements and challenges of surgical environments. Our fine-tuning approach significantly improves the model's performance in surgical scenes, reducing errors related to blurring and reflections, and achieving a more reliable and precise depth estimation.<|reference_end|> | arxiv | @article{lou2024surgical,
title={Surgical Depth Anything: Depth Estimation for Surgical Scenes using
Foundation Models},
author={Ange Lou and Yamin Li and Yike Zhang and Jack Noble},
journal={arXiv preprint arXiv:2410.07434},
year={2024},
archivePrefix={arXiv},
eprint={2410.07434},
primaryClass={cs.CV}
} | lou2024surgical |
arxiv-667814 | 2410.07436 | Toward Robust Real-World Audio Deepfake Detection: Closing the Explainability Gap | <|reference_start|>Toward Robust Real-World Audio Deepfake Detection: Closing the Explainability Gap: The rapid proliferation of AI-manipulated or generated audio deepfakes poses serious challenges to media integrity and election security. Current AI-driven detection solutions lack explainability and underperform in real-world settings. In this paper, we introduce novel explainability methods for state-of-the-art transformer-based audio deepfake detectors and open-source a novel benchmark for real-world generalizability. By narrowing the explainability gap between transformer-based audio deepfake detectors and traditional methods, our results not only build trust with human experts, but also pave the way for unlocking the potential of citizen intelligence to overcome the scalability issue in audio deepfake detection.<|reference_end|> | arxiv | @article{channing2024toward,
title={Toward Robust Real-World Audio Deepfake Detection: Closing the
Explainability Gap},
author={Georgia Channing and Juil Sock and Ronald Clark and Philip Torr and
Christian Schroeder de Witt},
journal={arXiv preprint arXiv:2410.07436},
year={2024},
archivePrefix={arXiv},
eprint={2410.07436},
primaryClass={cs.LG cs.SD eess.AS}
} | channing2024toward |
arxiv-667815 | 2410.07437 | Robust infrared small target detection using self-supervised and a contrario paradigms | <|reference_start|>Robust infrared small target detection using self-supervised and a contrario paradigms: Detecting small targets in infrared images poses significant challenges in defense applications due to the presence of complex backgrounds and the small size of the targets. Traditional object detection methods often struggle to balance high detection rates with low false alarm rates, especially when dealing with small objects. In this paper, we introduce a novel approach that combines a contrario paradigm with Self-Supervised Learning (SSL) to improve Infrared Small Target Detection (IRSTD). On the one hand, the integration of an a contrario criterion into a YOLO detection head enhances feature map responses for small and unexpected objects while effectively controlling false alarms. On the other hand, we explore SSL techniques to overcome the challenges of limited annotated data, common in IRSTD tasks. Specifically, we benchmark several representative SSL strategies for their effectiveness in improving small object detection performance. Our findings show that instance discrimination methods outperform masked image modeling strategies when applied to YOLO-based small object detection. Moreover, the combination of the a contrario and SSL paradigms leads to significant performance improvements, narrowing the gap with state-of-the-art segmentation methods and even outperforming them in frugal settings. This two-pronged approach offers a robust solution for improving IRSTD performance, particularly under challenging conditions.<|reference_end|> | arxiv | @article{ciocarlan2024robust,
title={Robust infrared small target detection using self-supervised and a
contrario paradigms},
author={Alina Ciocarlan and Sylvie Le H\'egarat-Mascle and Sidonie Lefebvre
and Arnaud Woiselle},
journal={arXiv preprint arXiv:2410.07437},
year={2024},
archivePrefix={arXiv},
eprint={2410.07437},
primaryClass={cs.CV}
} | ciocarlan2024robust |
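An a contrario criterion of the kind integrated into the detection head is typically a Number of False Alarms test, NFA = N_tests * P[B(n, p) >= k]: the expected count of regions with at least k of n responses active if the background alone fired each with probability p. A sketch with illustrative numbers (the paper's exact criterion and parameters may differ):

```python
from math import comb

def binomial_tail(n, k, p):
    """P[B(n, p) >= k]: probability of at least k successes under the
    noise-only background model."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Expected number of regions this significant under pure noise;
    a detection is kept when the NFA falls below a small threshold."""
    return n_tests * binomial_tail(n, k, p)

# 9 of 10 pixels firing when each fires with chance 0.01: clearly structured
significant = nfa(n_tests=10_000, n=10, k=9, p=0.01)
# 5 of 10 at the chance level 0.5: indistinguishable from background
background = nfa(n_tests=10_000, n=10, k=5, p=0.5)
```

Thresholding NFA rather than a raw score gives the false-alarm control that lets the detector stay sensitive to small, unexpected targets.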
arxiv-667816 | 2410.07441 | Zero-Shot Generalization of Vision-Based RL Without Data Augmentation | <|reference_start|>Zero-Shot Generalization of Vision-Based RL Without Data Augmentation: Generalizing vision-based reinforcement learning (RL) agents to novel environments remains a difficult and open challenge. Current trends are to collect large-scale datasets or use data augmentation techniques to prevent overfitting and improve downstream generalization. However, the computational and data collection costs increase exponentially with the number of task variations and can destabilize the already difficult task of training RL agents. In this work, we take inspiration from recent advances in computational neuroscience and propose a model, Associative Latent DisentAnglement (ALDA), that builds on standard off-policy RL towards zero-shot generalization. Specifically, we revisit the role of latent disentanglement in RL and show how combining it with a model of associative memory achieves zero-shot generalization on difficult task variations without relying on data augmentation. Finally, we formally show that data augmentation techniques are a form of weak disentanglement and discuss the implications of this insight.<|reference_end|> | arxiv | @article{batra2024zero-shot,
title={Zero-Shot Generalization of Vision-Based RL Without Data Augmentation},
author={Sumeet Batra and Gaurav S. Sukhatme},
journal={arXiv preprint arXiv:2410.07441},
year={2024},
archivePrefix={arXiv},
eprint={2410.07441},
primaryClass={cs.LG cs.AI cs.CV cs.RO}
} | batra2024zero-shot |
arxiv-667817 | 2410.07442 | Self-Supervised Learning for Real-World Object Detection: a Survey | <|reference_start|>Self-Supervised Learning for Real-World Object Detection: a Survey: Self-Supervised Learning (SSL) has emerged as a promising approach in computer vision, enabling networks to learn meaningful representations from large unlabeled datasets. SSL methods fall into two main categories: instance discrimination and Masked Image Modeling (MIM). While instance discrimination is fundamental to SSL, it was originally designed for classification and may be less effective for object detection, particularly for small objects. In this survey, we focus on SSL methods specifically tailored for real-world object detection, with an emphasis on detecting small objects in complex environments. Unlike previous surveys, we offer a detailed comparison of SSL strategies, including object-level instance discrimination and MIM methods, and assess their effectiveness for small object detection using both CNN and ViT-based architectures. Specifically, our benchmark is performed on the widely-used COCO dataset, as well as on a specialized real-world dataset focused on vehicle detection in infrared remote sensing imagery. We also assess the impact of pre-training on custom domain-specific datasets, highlighting how certain SSL strategies are better suited for handling uncurated data. Our findings highlight that instance discrimination methods perform well with CNN-based encoders, while MIM methods are better suited for ViT-based architectures and custom dataset pre-training. This survey provides a practical guide for selecting optimal SSL strategies, taking into account factors such as backbone architecture, object size, and custom pre-training requirements. 
Ultimately, we show that choosing an appropriate SSL pre-training strategy, along with a suitable encoder, significantly enhances performance in real-world object detection, particularly for small object detection in frugal settings.<|reference_end|> | arxiv | @article{ciocarlan2024self-supervised,
title={Self-Supervised Learning for Real-World Object Detection: a Survey},
author={Alina Ciocarlan and Sidonie Lefebvre and Sylvie Le H\'egarat-Mascle
and Arnaud Woiselle},
journal={arXiv preprint arXiv:2410.07442},
year={2024},
archivePrefix={arXiv},
eprint={2410.07442},
primaryClass={cs.CV}
} | ciocarlan2024self-supervised |
arxiv-667818 | 2410.07446 | KACQ-DCNN: Uncertainty-Aware Interpretable Kolmogorov-Arnold Classical-Quantum Dual-Channel Neural Network for Heart Disease Detection | <|reference_start|>KACQ-DCNN: Uncertainty-Aware Interpretable Kolmogorov-Arnold Classical-Quantum Dual-Channel Neural Network for Heart Disease Detection: Heart failure remains a major global health challenge, contributing significantly to the 17.8 million annual deaths from cardiovascular disease, highlighting the need for improved diagnostic tools. Current heart disease prediction models based on classical machine learning face limitations, including poor handling of high-dimensional, imbalanced data, limited performance on small datasets, and a lack of uncertainty quantification, while also being difficult for healthcare professionals to interpret. To address these issues, we introduce KACQ-DCNN, a novel classical-quantum hybrid dual-channel neural network that replaces traditional multilayer perceptrons and convolutional layers with Kolmogorov-Arnold Networks (KANs). This approach enhances function approximation with learnable univariate activation functions, reducing model complexity and improving generalization. The KACQ-DCNN 4-qubit 1-layered model significantly outperforms 37 benchmark models across multiple metrics, achieving an accuracy of 92.03%, a macro-average precision, recall, and F1 score of 92.00%, and an ROC-AUC score of 94.77%. Ablation studies demonstrate the synergistic benefits of combining classical and quantum components with KAN. Additionally, explainability techniques like LIME and SHAP provide feature-level insights, improving model transparency, while uncertainty quantification via conformal prediction ensures robust probability estimates. 
These results suggest that KACQ-DCNN offers a promising path toward more accurate, interpretable, and reliable heart disease predictions, paving the way for advancements in cardiovascular healthcare.<|reference_end|> | arxiv | @article{jahin2024kacq-dcnn:,
title={KACQ-DCNN: Uncertainty-Aware Interpretable Kolmogorov-Arnold
Classical-Quantum Dual-Channel Neural Network for Heart Disease Detection},
author={Md Abrar Jahin and Md. Akmol Masud and M. F. Mridha and Zeyar Aung
and Nilanjan Dey},
journal={arXiv preprint arXiv:2410.07446},
year={2024},
archivePrefix={arXiv},
eprint={2410.07446},
primaryClass={cs.LG}
} | jahin2024kacq-dcnn: |
arxiv-667819 | 2410.07447 | TinyLidarNet: 2D LiDAR-based End-to-End Deep Learning Model for F1TENTH Autonomous Racing | <|reference_start|>TinyLidarNet: 2D LiDAR-based End-to-End Deep Learning Model for F1TENTH Autonomous Racing: Prior research has demonstrated the effectiveness of end-to-end deep learning for robotic navigation, where the control signals are directly derived from raw sensory data. However, the majority of existing end-to-end navigation solutions are predominantly camera-based. In this paper, we introduce TinyLidarNet, a lightweight 2D LiDAR-based end-to-end deep learning model for autonomous racing. An F1TENTH vehicle using TinyLidarNet won 3rd place in the 12th F1TENTH Autonomous Grand Prix competition, demonstrating its competitive performance. We systematically analyze its performance on untrained tracks and computing requirements for real-time processing. We find that TinyLidarNet's 1D Convolutional Neural Network (CNN) based architecture significantly outperforms widely used Multi-Layer Perceptron (MLP) based architecture. In addition, we show that it can be processed in real-time on low-end micro-controller units (MCUs).<|reference_end|> | arxiv | @article{zarrar2024tinylidarnet:,
title={TinyLidarNet: 2D LiDAR-based End-to-End Deep Learning Model for F1TENTH
Autonomous Racing},
author={Mohammed Misbah Zarrar and Qitao Weng and Bakhbyergyen Yerjan and
Ahmet Soyyigit and Heechul Yun},
journal={arXiv preprint arXiv:2410.07447},
year={2024},
archivePrefix={arXiv},
eprint={2410.07447},
primaryClass={cs.RO cs.AI cs.CV cs.LG}
} | zarrar2024tinylidarnet: |
arxiv-667820 | 2410.07451 | Collective variables of neural networks: empirical time evolution and scaling laws | <|reference_start|>Collective variables of neural networks: empirical time evolution and scaling laws: This work presents a novel means for understanding learning dynamics and scaling relations in neural networks. We show that certain measures on the spectrum of the empirical neural tangent kernel, specifically entropy and trace, yield insight into the representations learned by a neural network and how these can be improved through architecture scaling. These results are demonstrated first on test cases before being shown on more complex networks, including transformers, auto-encoders, graph neural networks, and reinforcement learning studies. In testing on a wide range of architectures, we highlight the universal nature of training dynamics and further discuss how it can be used to understand the mechanisms behind learning in neural networks. We identify two such dominant mechanisms present throughout machine learning training. The first, information compression, is seen through a reduction in the entropy of the NTK spectrum during training, and occurs predominantly in small neural networks. The second, coined structure formation, is seen through an increasing entropy and thus, the creation of structure in the neural network representations beyond the prior established by the network at initialization. Due to the ubiquity of the latter in deep neural network architectures and its flexibility in the creation of feature-rich representations, we argue that this form of evolution of the network's entropy be considered the onset of a deep learning regime.<|reference_end|> | arxiv | @article{tovey2024collective,
title={Collective variables of neural networks: empirical time evolution and
scaling laws},
author={Samuel Tovey and Sven Krippendorf and Michael Spannowsky and
Konstantin Nikolaou and Christian Holm},
journal={arXiv preprint arXiv:2410.07451},
year={2024},
number={IPPP/24/66},
archivePrefix={arXiv},
eprint={2410.07451},
primaryClass={cs.LG physics.comp-ph}
} | tovey2024collective |
arxiv-667821 | 2410.07454 | Representation-Enhanced Neural Knowledge Integration with Application to Large-Scale Medical Ontology Learning | <|reference_start|>Representation-Enhanced Neural Knowledge Integration with Application to Large-Scale Medical Ontology Learning: A large-scale knowledge graph enhances reproducibility in biomedical data discovery by providing a standardized, integrated framework that ensures consistent interpretation across diverse datasets. It improves generalizability by connecting data from various sources, enabling broader applicability of findings across different populations and conditions. Generating a reliable knowledge graph by leveraging multi-source information from existing literature, however, is challenging, especially with a large number of nodes and heterogeneous relations. In this paper, we propose a general theoretically guaranteed statistical framework, called RENKI, to enable simultaneous learning of multiple relation types. RENKI generalizes various network models widely used in statistics and computer science. The proposed framework incorporates representation learning output into the initial entity embedding of a neural network that approximates the score function for the knowledge graph and continuously trains the model to fit observed facts. We prove nonasymptotic bounds for in-sample and out-of-sample weighted MSEs in relation to the pseudo-dimension of the knowledge graph function class. Additionally, we provide pseudo-dimensions for score functions based on multilayer neural networks with ReLU activation function, in the scenarios when the embedding parameters are either fixed or trainable. Finally, we complement our theoretical results with numerical studies and apply the method to learn a comprehensive medical knowledge graph combining a pretrained language model representation with knowledge graph links observed in several medical ontologies.
The experiments justify our theoretical findings and demonstrate the effect of weighting in the presence of heterogeneous relations and the benefit of incorporating representation learning in nonparametric models.<|reference_end|> | arxiv | @article{liu2024representation-enhanced,
title={Representation-Enhanced Neural Knowledge Integration with Application to
Large-Scale Medical Ontology Learning},
author={Suqi Liu and Tianxi Cai and Xiaoou Li},
journal={arXiv preprint arXiv:2410.07454},
year={2024},
archivePrefix={arXiv},
eprint={2410.07454},
primaryClass={stat.ME cs.LG math.ST stat.TH}
} | liu2024representation-enhanced |
arxiv-667822 | 2410.07456 | SAGE: Scalable Ground Truth Evaluations for Large Sparse Autoencoders | <|reference_start|>SAGE: Scalable Ground Truth Evaluations for Large Sparse Autoencoders: A key challenge in interpretability is to decompose model activations into meaningful features. Sparse autoencoders (SAEs) have emerged as a promising tool for this task. However, a central problem in evaluating the quality of SAEs is the absence of ground truth features to serve as an evaluation gold standard. Current evaluation methods for SAEs are therefore confronted with a significant trade-off: SAEs can either leverage toy models or other proxies with predefined ground truth features; or they use extensive prior knowledge of realistic task circuits. The former limits the generalizability of the evaluation results, while the latter limits the range of models and tasks that can be used for evaluations. We introduce SAGE: Scalable Autoencoder Ground-truth Evaluation, a ground truth evaluation framework for SAEs that scales to large state-of-the-art SAEs and models. We demonstrate that our method can automatically identify task-specific activations and compute ground truth features at these points. Compared to previous methods we reduce the training overhead by introducing a novel reconstruction method that allows to apply residual stream SAEs to sublayer activations. This eliminates the need for SAEs trained on every task-specific activation location. Then we validate the scalability of our framework, by evaluating SAEs on novel tasks on Pythia70M, GPT-2 Small, and Gemma-2-2. Our framework therefore paves the way for generalizable, large-scale evaluations of SAEs in interpretability research.<|reference_end|> | arxiv | @article{venhoff2024sage:,
title={SAGE: Scalable Ground Truth Evaluations for Large Sparse Autoencoders},
author={Constantin Venhoff and Anisoara Calinescu and Philip Torr and
Christian Schroeder de Witt},
journal={arXiv preprint arXiv:2410.07456},
year={2024},
archivePrefix={arXiv},
eprint={2410.07456},
primaryClass={cs.LG}
} | venhoff2024sage: |
arxiv-667823 | 2410.07457 | Responding to Promises: No-regret learning against followers with memory | <|reference_start|>Responding to Promises: No-regret learning against followers with memory: We consider a repeated Stackelberg game setup where the leader faces a sequence of followers of unknown types and must learn what commitments to make. While previous works have considered followers that best respond to the commitment announced by the leader in every round, we relax this setup in two ways. Motivated by natural scenarios where the leader's reputation factors into how the followers choose their response, we consider followers with memory. Specifically, we model followers that base their response on not just the leader's current commitment but on an aggregate of their past commitments. In developing learning strategies that the leader can employ against such followers, we make the second relaxation and assume boundedly rational followers. In particular, we focus on followers employing quantal responses. Interestingly, we observe that the smoothness property offered by the quantal response (QR) model helps in addressing the challenge posed by learning against followers with memory. Utilizing techniques from online learning, we develop algorithms that guarantee $O(\sqrt{T})$ regret for quantal responding memory-less followers and $O(\sqrt{BT})$ regret for followers with bounded memory of length $B$ with both scaling polynomially in game parameters.<|reference_end|> | arxiv | @article{hebbar2024responding,
title={Responding to Promises: No-regret learning against followers with memory},
author={Vijeth Hebbar and C\'edric Langbort},
journal={arXiv preprint arXiv:2410.07457},
year={2024},
archivePrefix={arXiv},
eprint={2410.07457},
primaryClass={cs.GT}
} | hebbar2024responding |
arxiv-667824 | 2410.07458 | Systematic Feature Design for Cycle Life Prediction of Lithium-Ion Batteries During Formation | <|reference_start|>Systematic Feature Design for Cycle Life Prediction of Lithium-Ion Batteries During Formation: Optimization of the formation step in lithium-ion battery manufacturing is challenging due to limited physical understanding of solid electrolyte interphase formation and the long testing time (~100 days) for cells to reach the end of life. We propose a systematic feature design framework that requires minimal domain knowledge for accurate cycle life prediction during formation. Two simple Q(V) features designed from our framework, extracted from formation data without any additional diagnostic cycles, achieved a median of 9.20% error for cycle life prediction, outperforming thousands of autoML models using pre-defined features. We attribute the strong performance of our designed features to their physical origins - the voltage ranges identified by our framework capture the effects of formation temperature and microscopic particle resistance heterogeneity. By designing highly interpretable features, our approach can accelerate formation research, leveraging the interplay between data-driven feature design and mechanistic understanding.<|reference_end|> | arxiv | @article{rhyu2024systematic,
title={Systematic Feature Design for Cycle Life Prediction of Lithium-Ion
Batteries During Formation},
author={Jinwook Rhyu and Joachim Schaeffer and Michael L. Li and Xiao Cui and
William C. Chueh and Martin Z. Bazant and Richard D. Braatz},
journal={arXiv preprint arXiv:2410.07458},
year={2024},
archivePrefix={arXiv},
eprint={2410.07458},
primaryClass={cs.LG stat.AP}
} | rhyu2024systematic |
arxiv-667825 | 2410.07459 | User Feedback in Continuous Software Engineering: Revealing the State-of-Practice | <|reference_start|>User Feedback in Continuous Software Engineering: Revealing the State-of-Practice: Context: Organizations opt for continuous delivery of incremental updates to deal with uncertainty and minimize waste. However, applying continuous software engineering (CSE) practices requires a continuous feedback loop with input from customers and end-users. Challenges: It becomes increasingly challenging to apply traditional requirements elicitation and validation techniques with ever-shrinking software delivery cycles. At the same time, frequent deliveries generate an abundance of usage data and telemetry informing engineering teams of end-user behavior. The literature describing how practitioners work with user feedback in CSE is limited. Objectives: We aim to explore the state of practice related to utilization of user feedback in CSE. Specifically, what practices are used, how, and the shortcomings of these practices. Method: We conduct a qualitative survey and report analysis from 21 interviews in 13 product development companies. We apply thematic and cross-case analysis to interpret the data. Results: Based on our earlier work we suggest a conceptual model of how user feedback is utilized in CSE. We further report the identified challenges with the continuous collection and analysis of user feedback and identify implications for practice. Conclusions: Companies use a combination of qualitative and quantitative methods to infer end-user preferences. At the same time, continuous collection, analysis, interpretation, and use of data in decisions are problematic. The challenges pertain to selecting the right metrics and analysis techniques, resource allocation, and difficulties in accessing vaguely defined user groups.
Our advice to practitioners in CSE is to ensure sufficient resources and effort for interpretation of the feedback, which can be facilitated by telemetry dashboards.<|reference_end|> | arxiv | @article{tkalich2024user,
title={User Feedback in Continuous Software Engineering: Revealing the
State-of-Practice},
author={Anastasiia Tkalich and Eriks Klotins and Tor Sporsem and Viktoria
Stray and Nils Brede Moe and Astri Barbala},
journal={arXiv preprint arXiv:2410.07459},
year={2024},
archivePrefix={arXiv},
eprint={2410.07459},
primaryClass={cs.SE}
} | tkalich2024user |
arxiv-667826 | 2410.07460 | Generalizing Segmentation Foundation Model Under Sim-to-real Domain-shift for Guidewire Segmentation in X-ray Fluoroscopy | <|reference_start|>Generalizing Segmentation Foundation Model Under Sim-to-real Domain-shift for Guidewire Segmentation in X-ray Fluoroscopy: Guidewire segmentation during endovascular interventions holds the potential to significantly enhance procedural accuracy, improving visualization and providing critical feedback that can support both physicians and robotic systems in navigating complex vascular pathways. Unlike supervised segmentation networks, which need many expensive expert-annotated labels, sim-to-real domain adaptation approaches utilize synthetic data from simulations, offering a cost-effective solution. The success of models like Segment-Anything (SAM) has driven advancements in image segmentation foundation models with strong zero/few-shot generalization through prompt engineering. However, they struggle with medical images like X-ray fluoroscopy and the domain-shifts of the data. Given the challenges of acquiring annotation and the accessibility of labeled simulation data, we propose a sim-to-real domain adaption framework with a coarse-to-fine strategy to adapt SAM to X-ray fluoroscopy guidewire segmentation without any annotation on the target domain. We first generate the pseudo-labels by utilizing a simple source image style transfer technique that preserves the guidewire structure. Then, we develop a weakly supervised self-training architecture to fine-tune an end-to-end student SAM with the coarse labels by imposing consistency regularization and supervision from the teacher SAM network. We validate the effectiveness of the proposed method on a publicly available Cardiac dataset and an in-house Neurovascular dataset, where our method surpasses both pre-trained SAM and many state-of-the-art domain adaptation techniques by a large margin. 
Our code will be made public on GitHub soon.<|reference_end|> | arxiv | @article{wen2024generalizing,
title={Generalizing Segmentation Foundation Model Under Sim-to-real
Domain-shift for Guidewire Segmentation in X-ray Fluoroscopy},
author={Yuxuan Wen and Evgenia Roussinova and Olivier Brina and Paolo Machi
and Mohamed Bouri},
journal={arXiv preprint arXiv:2410.07460},
year={2024},
archivePrefix={arXiv},
eprint={2410.07460},
primaryClass={cs.CV}
} | wen2024generalizing |
arxiv-667827 | 2410.07461 | Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning | <|reference_start|>Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning: Network pruning has emerged as a potential solution to make LLMs cheaper to deploy. However, existing LLM pruning approaches universally rely on the C4 dataset as the calibration data for calculating pruning scores, leaving its optimality unexplored. In this study, we evaluate the choice of calibration data on LLM pruning, across a wide range of datasets that are most commonly used in LLM training and evaluation, including four pretraining datasets as well as three categories of downstream tasks encompassing nine datasets. Each downstream dataset is prompted with In-Context Learning (ICL) and Chain-of-Thought (CoT), respectively. Besides the already intriguing observation that the choice of calibration data significantly impacts the performance of pruned LLMs, our results also uncover several subtle and often unexpected findings, summarized as follows: (1) C4 is not the optimal choice for LLM pruning, even among commonly used pre-training datasets; (2) arithmetic datasets, when used as calibration data, perform on par or even better than pre-training datasets; (3) pruning with downstream datasets does not necessarily help the corresponding downstream task, compared to pre-training data; (4) ICL is widely beneficial to all data categories, whereas CoT is only useful on certain tasks. Our findings shed light on the importance of carefully selecting calibration data for LLM pruning and pave the way for more efficient deployment of these powerful models in real-world applications. We release our code at: https://github.com/abx393/llm-pruning-calibration-data.<|reference_end|> | arxiv | @article{bandari2024is,
title={Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data
for LLM Pruning},
author={Abhinav Bandari and Lu Yin and Cheng-Yu Hsieh and Ajay Kumar Jaiswal
and Tianlong Chen and Li Shen and Ranjay Krishna and Shiwei Liu},
journal={arXiv preprint arXiv:2410.07461},
year={2024},
archivePrefix={arXiv},
eprint={2410.07461},
primaryClass={cs.CL}
} | bandari2024is |
arxiv-667828 | 2410.07463 | Language-Guided Joint Audio-Visual Editing via One-Shot Adaptation | <|reference_start|>Language-Guided Joint Audio-Visual Editing via One-Shot Adaptation: In this paper, we introduce a novel task called language-guided joint audio-visual editing. Given an audio and image pair of a sounding event, this task aims at generating new audio-visual content by editing the given sounding event conditioned on the language guidance. For instance, we can alter the background environment of a sounding object while keeping its appearance unchanged, or we can add new sounds contextualized to the visual content. To address this task, we propose a new diffusion-based framework for joint audio-visual editing and introduce two key ideas. Firstly, we propose a one-shot adaptation approach to tailor generative diffusion models for audio-visual content editing. With as few as one audio-visual sample, we jointly transfer the audio and vision diffusion models to the target domain. After fine-tuning, our model enables consistent generation of this audio-visual sample. Secondly, we introduce a cross-modal semantic enhancement approach. We observe that when using language as content editing guidance, the vision branch may overlook editing requirements. This phenomenon, termed catastrophic neglect, hampers audio-visual alignment during content editing. We therefore enhance semantic consistency between language and vision to mitigate this issue. Extensive experiments validate the effectiveness of our method in language-based audio-visual editing and highlight its superiority over several baseline approaches. We recommend that readers visit our project page for more details: https://liangsusan-git.github.io/project/avedit/.<|reference_end|> | arxiv | @article{liang2024language-guided,
title={Language-Guided Joint Audio-Visual Editing via One-Shot Adaptation},
author={Susan Liang and Chao Huang and Yapeng Tian and Anurag Kumar and Chenliang Xu},
journal={arXiv preprint arXiv:2410.07463},
year={2024},
archivePrefix={arXiv},
eprint={2410.07463},
primaryClass={cs.CV}
} | liang2024language-guided |
arxiv-667829 | 2410.07465 | Preconditioning Low Rank Generalized Minimal Residual Method (GMRES) for Implicit Discretizations of Matrix Differential Equations | <|reference_start|>Preconditioning Low Rank Generalized Minimal Residual Method (GMRES) for Implicit Discretizations of Matrix Differential Equations: This work proposes a new class of preconditioners for the low rank Generalized Minimal Residual Method (GMRES) for multiterm matrix equations arising from implicit timestepping of linear matrix differential equations. We are interested in computing low rank solutions to matrix equations, e.g. arising from spatial discretization of stiff partial differential equations (PDEs). The low rank GMRES method is a particular class of Krylov subspace method where the iteration is performed on the low rank factors of the solution. Such methods can exploit the low rank property of the solution to save on computational and storage cost. Of critical importance for the efficiency and applicability of the low rank GMRES method is the availability of an effective low rank preconditioner that operates directly on the low rank factors of the solution and that can limit the iteration count and the maximal Krylov rank. The preconditioner we propose here is based on the basis update and Galerkin (BUG) method, resulting from the dynamic low rank approximation. It is a nonlinear preconditioner for the low rank GMRES scheme that naturally operates on the low rank factors. Extensive numerical tests show that this new preconditioner is highly efficient in limiting iteration count and maximal Krylov rank. We show that the preconditioner performs well for general diffusion equations including highly challenging problems, e.g. high contrast, anisotropic equations. Further, it compares favorably with the state of the art exponential sum preconditioner. 
We also propose a hybrid BUG - exponential sum preconditioner based on alternating between the two preconditioners.<|reference_end|> | arxiv | @article{meng2024preconditioning,
title={Preconditioning Low Rank Generalized Minimal Residual Method (GMRES) for
Implicit Discretizations of Matrix Differential Equations},
author={Shixu Meng and Daniel Appelo and Yingda Cheng},
journal={arXiv preprint arXiv:2410.07465},
year={2024},
archivePrefix={arXiv},
eprint={2410.07465},
primaryClass={math.NA cs.NA}
} | meng2024preconditioning |
arxiv-667830 | 2410.07466 | Skip Hash: A Fast Ordered Map Via Software Transactional Memory | <|reference_start|>Skip Hash: A Fast Ordered Map Via Software Transactional Memory: Scalable ordered maps must ensure that range queries, which operate over many consecutive keys, provide intuitive semantics (e.g., linearizability) without degrading the performance of concurrent insertions and removals. These goals are difficult to achieve simultaneously when concurrent data structures are built using only locks and compare-and-swap objects. However, recent innovations in software transactional memory (STM) allow programmers to assume that multi-word atomic operations can be fast and simple. This paper introduces the skip hash, a new ordered map designed around that assumption. It combines a skip list and a hash map behind a single abstraction, resulting in $O(1)$ overheads for most operations. The skip hash makes use of a novel range query manager -- again leveraging STM -- to achieve fast, linearizable range queries that do not inhibit scalability. In performance evaluation, we show that the skip hash outperforms the state of the art in almost all cases. This places the skip hash in the uncommon position of being both exceedingly fast and exceedingly simple.<|reference_end|> | arxiv | @article{rodriguez2024skip,
title={Skip Hash: A Fast Ordered Map Via Software Transactional Memory},
author={Matthew Rodriguez and Vitaly Aksenov and Michael Spear},
journal={arXiv preprint arXiv:2410.07466},
year={2024},
archivePrefix={arXiv},
eprint={2410.07466},
primaryClass={cs.DC cs.DS}
} | rodriguez2024skip |
arxiv-667831 | 2410.07468 | Fast Real Evaluation Through Sound Mixed-Precision Tuning | <|reference_start|>Fast Real Evaluation Through Sound Mixed-Precision Tuning: Evaluating a real-valued expression to high precision is a key building block in computational mathematics, physics, and numerics. A typical implementation uses a uniform precision for each operation, and doubles that precision until the real result can be bounded to some sufficiently narrow interval. However, this is wasteful: usually only a few operations really need to be performed at high precision, and the bulk of the expression could use much lower precision. Uniform precision can also waste iterations discovering the necessary precision and then still overestimate by up to a factor of two. We propose to instead use mixed-precision interval arithmetic to evaluate real-valued expressions. A key challenge is deriving the mixed-precision assignment both soundly and quickly. To do so, we introduce a sound variation of error Taylor series and condition numbers, specialized to interval arithmetic, that can be evaluated with minimal overhead thanks to an "exponent trick". Our implementation, Reval, achieves a speed-up of 1.25x compared to the state-of-the-art Sollya tool, with the speed-up increasing to 2.99x on the most difficult input points. An examination of the precisions used with and without precision tuning shows that the speed-up results come from quickly assigning lower precisions for the majority of operations.<|reference_end|> | arxiv | @article{yadrov2024fast,
title={Fast Real Evaluation Through Sound Mixed-Precision Tuning},
author={Artem Yadrov and Pavel Panchekha},
journal={arXiv preprint arXiv:2410.07468},
year={2024},
archivePrefix={arXiv},
eprint={2410.07468},
primaryClass={math.NA cs.MS cs.NA}
} | yadrov2024fast |
arxiv-667832 | 2410.07471 | SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection | <|reference_start|>SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection: Fine-tuning on task-specific data to boost downstream performance is a crucial step for leveraging Large Language Models (LLMs). However, previous studies have demonstrated that fine-tuning the models on several adversarial samples or even benign data can greatly compromise the model's pre-equipped alignment and safety capabilities. In this work, we propose SEAL, a novel framework to enhance safety in LLM fine-tuning. SEAL learns a data ranker based on bilevel optimization to up rank the safe and high-quality fine-tuning data and down rank the unsafe or low-quality ones. Models trained with SEAL demonstrate superior quality over multiple baselines, with 8.5% and 9.7% win rate increase compared to random selection respectively on Llama-3-8b-Instruct and Merlinite-7b models. Our code is available on GitHub at https://github.com/hanshen95/SEAL.<|reference_end|> | arxiv | @article{shen2024seal:,
title={SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection},
author={Han Shen and Pin-Yu Chen and Payel Das and Tianyi Chen},
journal={arXiv preprint arXiv:2410.07471},
year={2024},
archivePrefix={arXiv},
eprint={2410.07471},
primaryClass={cs.LG cs.AI cs.CL}
} | shen2024seal: |
arxiv-667833 | 2410.07472 | Exploring the design space of deep-learning-based weather forecasting systems | <|reference_start|>Exploring the design space of deep-learning-based weather forecasting systems: Despite tremendous progress in developing deep-learning-based weather forecasting systems, their design space, including the impact of different design choices, is yet to be well understood. This paper aims to fill this knowledge gap by systematically analyzing these choices including architecture, problem formulation, pretraining scheme, use of image-based pretrained models, loss functions, noise injection, multi-step inputs, additional static masks, multi-step finetuning (including larger stride models), as well as training on a larger dataset. We study fixed-grid architectures such as UNet, fully convolutional architectures, and transformer-based models, along with grid-invariant architectures, including graph-based and operator-based models. Our results show that fixed-grid architectures outperform grid-invariant architectures, indicating a need for further architectural developments in grid-invariant models such as neural operators. We therefore propose a hybrid system that combines the strong performance of fixed-grid models with the flexibility of grid-invariant architectures. We further show that multi-step fine-tuning is essential for most deep-learning models to work well in practice, which has been a common practice in the past. Pretraining objectives degrade performance in comparison to supervised training, while image-based pretrained models provide useful inductive biases in some cases in comparison to training the model from scratch. Interestingly, we see a strong positive effect of using a larger dataset when training a smaller model as compared to training on a smaller dataset for longer. Larger models, on the other hand, primarily benefit from just an increase in the computational budget. 
We believe that these results will aid in the design of better weather forecasting systems in the future.<|reference_end|> | arxiv | @article{siddiqui2024exploring,
title={Exploring the design space of deep-learning-based weather forecasting
systems},
author={Shoaib Ahmed Siddiqui and Jean Kossaifi and Boris Bonev and
Christopher Choy and Jan Kautz and David Krueger and Kamyar Azizzadenesheli},
journal={arXiv preprint arXiv:2410.07472},
year={2024},
archivePrefix={arXiv},
eprint={2410.07472},
primaryClass={cs.LG cs.AI}
} | siddiqui2024exploring |
arxiv-667834 | 2410.07473 | Localizing Factual Inconsistencies in Attributable Text Generation | <|reference_start|>Localizing Factual Inconsistencies in Attributable Text Generation: There has been an increasing interest in detecting hallucinations in model-generated texts, both manually and automatically, at varying levels of granularity. However, most existing methods fail to precisely pinpoint the errors. In this work, we introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation, at a fine-grained level. Drawing inspiration from Neo-Davidsonian formal semantics, we propose decomposing the generated text into minimal predicate-argument level propositions, expressed as simple question-answer (QA) pairs, and assess whether each individual QA pair is supported by a trusted reference text. As each QA pair corresponds to a single semantic relation between a predicate and an argument, QASemConsistency effectively localizes the unsupported information. We first demonstrate the effectiveness of the QASemConsistency methodology for human annotation, by collecting crowdsourced annotations of granular consistency errors, while achieving a substantial inter-annotator agreement ($\kappa > 0.7)$. Then, we implement several methods for automatically detecting localized factual inconsistencies, with both supervised entailment models and open-source LLMs.<|reference_end|> | arxiv | @article{cattan2024localizing,
title={Localizing Factual Inconsistencies in Attributable Text Generation},
author={Arie Cattan and Paul Roit and Shiyue Zhang and David Wan and Roee
Aharoni and Idan Szpektor and Mohit Bansal and Ido Dagan},
journal={arXiv preprint arXiv:2410.07473},
year={2024},
archivePrefix={arXiv},
eprint={2410.07473},
primaryClass={cs.CL}
} | cattan2024localizing |
arxiv-667835 | 2410.07475 | Progressive Multi-Modal Fusion for Robust 3D Object Detection | <|reference_start|>Progressive Multi-Modal Fusion for Robust 3D Object Detection: Multi-sensor fusion is crucial for accurate 3D object detection in autonomous driving, with cameras and LiDAR being the most commonly used sensors. However, existing methods perform sensor fusion in a single view by projecting features from both modalities either in Bird's Eye View (BEV) or Perspective View (PV), thus sacrificing complementary information such as height or geometric proportions. To address this limitation, we propose ProFusion3D, a progressive fusion framework that combines features in both BEV and PV at both intermediate and object query levels. Our architecture hierarchically fuses local and global features, enhancing the robustness of 3D object detection. Additionally, we introduce a self-supervised mask modeling pre-training strategy to improve multi-modal representation learning and data efficiency through three novel objectives. Extensive experiments on nuScenes and Argoverse2 datasets conclusively demonstrate the efficacy of ProFusion3D. Moreover, ProFusion3D is robust to sensor failure, demonstrating strong performance when only one modality is available.<|reference_end|> | arxiv | @article{mohan2024progressive,
title={Progressive Multi-Modal Fusion for Robust 3D Object Detection},
author={Rohit Mohan and Daniele Cattaneo and Florian Drews and Abhinav Valada},
journal={arXiv preprint arXiv:2410.07475},
year={2024},
archivePrefix={arXiv},
eprint={2410.07475},
primaryClass={cs.CV}
} | mohan2024progressive |
arxiv-667836 | 2410.07476 | Unifying and Verifying Mechanistic Interpretations: A Case Study with Group Operations | <|reference_start|>Unifying and Verifying Mechanistic Interpretations: A Case Study with Group Operations: A recent line of work in mechanistic interpretability has focused on reverse-engineering the computation performed by neural networks trained on the binary operation of finite groups. We investigate the internals of one-hidden-layer neural networks trained on this task, revealing previously unidentified structure and producing a more complete description of such models that unifies the explanations of previous works. Notably, these models approximate equivariance in each input argument. We verify that our explanation applies to a large fraction of networks trained on this task by translating it into a compact proof of model performance, a quantitative evaluation of model understanding. In particular, our explanation yields a guarantee of model accuracy that runs in 30% the time of brute force and gives a >=95% accuracy bound for 45% of the models we trained. We were unable to obtain nontrivial non-vacuous accuracy bounds using only explanations from previous works.<|reference_end|> | arxiv | @article{wu2024unifying,
title={Unifying and Verifying Mechanistic Interpretations: A Case Study with
Group Operations},
author={Wilson Wu and Louis Jaburi and Jacob Drori and Jason Gross},
journal={arXiv preprint arXiv:2410.07476},
year={2024},
archivePrefix={arXiv},
eprint={2410.07476},
primaryClass={cs.LG stat.ML}
} | wu2024unifying |
arxiv-667837 | 2410.07484 | WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents | <|reference_start|>WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents: Can large language models (LLMs) directly serve as powerful world models for model-based agents? While the gaps between the prior knowledge of LLMs and the specified environment's dynamics do exist, our study reveals that the gaps can be bridged by aligning an LLM with its deployed environment and such "world alignment" can be efficiently achieved by rule learning on LLMs. Given the rich prior knowledge of LLMs, only a few additional rules suffice to align LLM predictions with the specified environment dynamics. To this end, we propose a neurosymbolic approach to learn these rules gradient-free through LLMs, by inducing, updating, and pruning rules based on comparisons of agent-explored trajectories and world model predictions. The resulting world model is composed of the LLM and the learned rules. Our embodied LLM agent "WALL-E" is built upon model-predictive control (MPC). By optimizing look-ahead actions based on the precise world model, MPC significantly improves exploration and learning efficiency. Compared to existing LLM agents, WALL-E's reasoning only requires a few principal rules rather than verbose buffered trajectories being included in the LLM input. On open-world challenges in Minecraft and ALFWorld, WALL-E achieves higher success rates than existing methods, with lower costs on replanning time and the number of tokens used for reasoning. In Minecraft, WALL-E exceeds baselines by 15-30% in success rate while costing 8-20 fewer replanning rounds and only 60-80% of tokens. In ALFWorld, its success rate surges to a new record high of 95% only after 6 iterations.<|reference_end|> | arxiv | @article{zhou2024wall-e:,
title={WALL-E: World Alignment by Rule Learning Improves World Model-based LLM
Agents},
author={Siyu Zhou and Tianyi Zhou and Yijun Yang and Guodong Long and Deheng
Ye and Jing Jiang and Chengqi Zhang},
journal={arXiv preprint arXiv:2410.07484},
year={2024},
archivePrefix={arXiv},
eprint={2410.07484},
primaryClass={cs.AI}
} | zhou2024wall-e: |
arxiv-667838 | 2410.07485 | Gem: Gaussian Mixture Model Embeddings for Numerical Feature Distributions | <|reference_start|>Gem: Gaussian Mixture Model Embeddings for Numerical Feature Distributions: Embeddings are now used to underpin a wide variety of data management tasks, including entity resolution, dataset search and semantic type detection. Such applications often involve datasets with numerical columns, but there has been more emphasis placed on the semantics of categorical data in embeddings than on the distinctive features of numerical data. In this paper, we propose a method called Gem (Gaussian mixture model embeddings) that creates embeddings that build on numerical value distributions from columns. The proposed method specializes a Gaussian Mixture Model (GMM) to identify and cluster columns with similar value distributions. We introduce a signature mechanism that generates a probability matrix for each column, indicating its likelihood of belonging to specific Gaussian components, which can be used for different applications, such as to determine semantic types. Finally, we generate embeddings for three numerical data properties: distributional, statistical, and contextual. Our core method focuses solely on numerical columns without using table names or neighboring columns for context. However, the method can be combined with other types of evidence, and we later integrate attribute names with the Gaussian embeddings to evaluate the method's contribution to improving overall performance. We compare Gem with several baseline methods for numeric only and numeric + context tasks, showing that Gem consistently outperforms the baselines on four benchmark datasets.<|reference_end|> | arxiv | @article{rauf2024gem:,
title={Gem: Gaussian Mixture Model Embeddings for Numerical Feature
Distributions},
author={Hafiz Tayyab Rauf and Alex Bogatu and Norman W. Paton and Andre
Freitas},
journal={arXiv preprint arXiv:2410.07485},
year={2024},
archivePrefix={arXiv},
eprint={2410.07485},
primaryClass={cs.DB cs.LG}
} | rauf2024gem: |
arxiv-667839 | 2410.07486 | Visual Writing: Writing by Manipulating Visual Representations of Stories | <|reference_start|>Visual Writing: Writing by Manipulating Visual Representations of Stories: We introduce "visual writing", an approach to writing stories by manipulating visuals instead of words. Visual writing relies on editable visual representations of time, entities, events, and locations to offer representations more suited to specific editing tasks. We propose a taxonomy for these representations and implement a prototype software supporting the visual writing workflow. The system allows writers to edit the story by alternating between modifying the text and manipulating visual representations to edit entities, actions, locations, and order of events. We evaluate this workflow with eight creative writers and find visual writing can help find specific passages, keep track of story elements, specify edits, and explore story variations in a way that encourages creativity.<|reference_end|> | arxiv | @article{masson2024visual,
title={Visual Writing: Writing by Manipulating Visual Representations of
Stories},
author={Damien Masson and Zixin Zhao and Fanny Chevalier},
journal={arXiv preprint arXiv:2410.07486},
year={2024},
archivePrefix={arXiv},
eprint={2410.07486},
primaryClass={cs.HC}
} | masson2024visual |
arxiv-667840 | 2410.07490 | MoDEM: Mixture of Domain Expert Models | <|reference_start|>MoDEM: Mixture of Domain Expert Models: We propose a novel approach to enhancing the performance and efficiency of large language models (LLMs) by combining domain prompt routing with domain-specialized models. We introduce a system that utilizes a BERT-based router to direct incoming prompts to the most appropriate domain expert model. These expert models are specifically tuned for domains such as health, mathematics and science. Our research demonstrates that this approach can significantly outperform general-purpose models of comparable size, leading to a superior performance-to-cost ratio across various benchmarks. The implications of this study suggest a potential paradigm shift in LLM development and deployment. Rather than focusing solely on creating increasingly large, general-purpose models, the future of AI may lie in developing ecosystems of smaller, highly specialized models coupled with sophisticated routing systems. This approach could lead to more efficient resource utilization, reduced computational costs, and superior overall performance.<|reference_end|> | arxiv | @article{simonds2024modem:,
title={MoDEM: Mixture of Domain Expert Models},
author={Toby Simonds and Kemal Kurniawan and Jey Han Lau},
journal={arXiv preprint arXiv:2410.07490},
year={2024},
archivePrefix={arXiv},
eprint={2410.07490},
primaryClass={cs.CL}
} | simonds2024modem: |
arxiv-667841 | 2410.07491 | Transducer Consistency Regularization for Speech to Text Applications | <|reference_start|>Transducer Consistency Regularization for Speech to Text Applications: Consistency regularization is a commonly used practice to encourage the model to generate consistent representation from distorted input features and improve model generalization. It shows significant improvement on various speech applications that are optimized with cross entropy criterion. However, it is not straightforward to apply consistency regularization for the transducer-based approaches, which are widely adopted for speech applications due to the competitive performance and streaming characteristic. The main challenge is from the vast alignment space of the transducer optimization criterion and not all the alignments within the space contribute to the model optimization equally. In this study, we present Transducer Consistency Regularization (TCR), a consistency regularization method for transducer models. We apply distortions such as spec augmentation and dropout to create different data views and minimize the distribution difference. We utilize occupational probabilities to give different weights on transducer output distributions, thus only alignments close to oracle alignments would contribute to the model learning. Our experiments show the proposed method is superior to other consistency regularization implementations and could effectively reduce word error rate (WER) by 4.3\% relatively comparing with a strong baseline on the \textsc{Librispeech} dataset.<|reference_end|> | arxiv | @article{tseng2024transducer,
title={Transducer Consistency Regularization for Speech to Text Applications},
author={Cindy Tseng and Yun Tang and Vijendra Raj Apsingekar},
journal={arXiv preprint arXiv:2410.07491},
year={2024},
archivePrefix={arXiv},
eprint={2410.07491},
primaryClass={cs.CL eess.AS}
} | tseng2024transducer |
arxiv-667842 | 2410.07492 | Simulating the blood transfusion system in Kenya: Modelling methods and exploratory analyses | <|reference_start|>Simulating the blood transfusion system in Kenya: Modelling methods and exploratory analyses: The process of collecting blood from donors and making it available for transfusion requires a complex series of operations involving multiple actors and resources at each step. Ensuring hospitals receive adequate and safe blood for transfusion is a common challenge across low- and middle-income countries, but is rarely addressed from a system level. This paper presents the first use of discrete event simulation to study the blood system in Kenya and to explore the effect of variations and perturbations at different steps of the system on meeting patient blood demand. A process map of the Kenyan blood system was developed to capture critical steps from blood donation to transfusion using interviews with blood bank, hospital, and laboratory personnel at four public hospitals across three counties in Kenya. The blood system was simulated starting with blood collection, a blood bank where blood is tested and stored before it is issued, a major hospital attached to the blood bank, and several smaller hospitals served by the same blood bank. Values for supply-side parameters were based mainly on expert opinion; demand-side parameters were based on data from blood requisitions made in hospital wards, and dispatch of blood from the hospital laboratory. Illustrative examples demonstrate how the model can be used to explore the impacts of changes in blood collection (e.g., prioritising different donor types), blood demand (e.g., differing clinical case mix), and blood distribution (e.g., restocking strategies) on meeting demand at patient level. The model can reveal potential process impediments in the blood system and aid in choosing strategies for improving blood collection, distribution or use. Such a systems approach allows for interventions at different steps in the blood continuum to be tested on blood availability for different patients presenting at diverse hospitals across the country.<|reference_end|> | arxiv | @article{tian2024simulating,
title={Simulating the blood transfusion system in Kenya: Modelling methods and
exploratory analyses},
author={Yiqi Tian and Bo Zeng and Jana MacLeod and Gatwiri Murithi and Cindy
M. Makanga and Hillary Barmasai and Linda Barnes and Rahul S. Bidanda and
Tonny Ejilkon Epuu and Robert Kamu Kaburu and Tecla Chelagat and Jason Madan
and Jennifer Makin and Alejandro Munoz-Valencia and Carolyne Njoki and Kevin
Ochieng and Bernard Olayo and Jose Paiz and Kristina E. Rudd and Mark Yazer
and Juan Carlos Puyana and Bopaya Bidanda and Jayant Rajgopal and Pratap
Kumar},
journal={arXiv preprint arXiv:2410.07492},
year={2024},
archivePrefix={arXiv},
eprint={2410.07492},
primaryClass={physics.med-ph cs.SY eess.SY}
} | tian2024simulating |
arxiv-667843 | 2410.07493 | Autonomous Robotic System with Optical Coherence Tomography Guidance for Vascular Anastomosis | <|reference_start|>Autonomous Robotic System with Optical Coherence Tomography Guidance for Vascular Anastomosis: Vascular anastomosis, the surgical connection of blood vessels, is essential in procedures such as organ transplants and reconstructive surgeries. The precision required limits accessibility due to the extensive training needed, with manual suturing leading to variable outcomes and revision rates up to 7.9%. Existing robotic systems, while promising, are either fully teleoperated or lack the capabilities necessary for autonomous vascular anastomosis. We present the Micro Smart Tissue Autonomous Robot (micro-STAR), an autonomous robotic system designed to perform vascular anastomosis on small-diameter vessels. The micro-STAR system integrates a novel suturing tool equipped with Optical Coherence Tomography (OCT) fiber-optic sensor and a microcamera, enabling real-time tissue detection and classification. Our system autonomously places sutures and manipulates tissue with minimal human intervention. In an ex vivo study, micro-STAR achieved outcomes competitive with experienced surgeons in terms of leak pressure, lumen reduction, and suture placement variation, completing 90% of sutures without human intervention. This represents the first instance of a robotic system autonomously performing vascular anastomosis on real tissue, offering significant potential for improving surgical precision and expanding access to high-quality care.<|reference_end|> | arxiv | @article{haworth2024autonomous,
title={Autonomous Robotic System with Optical Coherence Tomography Guidance for
Vascular Anastomosis},
author={Jesse Haworth and Rishi Biswas and Justin Opfermann and Michael Kam
and Yaning Wang and Desire Pantalone and Francis X. Creighton and Robin Yang
and Jin U. Kang and Axel Krieger},
journal={arXiv preprint arXiv:2410.07493},
year={2024},
archivePrefix={arXiv},
eprint={2410.07493},
primaryClass={cs.RO cs.SY eess.SY}
} | haworth2024autonomous |
arxiv-667844 | 2410.07494 | G$^2$TR: Generalized Grounded Temporal Reasoning for Robot Instruction Following by Combining Large Pre-trained Models | <|reference_start|>G$^2$TR: Generalized Grounded Temporal Reasoning for Robot Instruction Following by Combining Large Pre-trained Models: Consider the scenario where a human cleans a table and a robot observing the scene is instructed with the task "Remove the cloth using which I wiped the table". Instruction following with temporal reasoning requires the robot to identify the relevant past object interaction, ground the object of interest in the present scene, and execute the task according to the human's instruction. Directly grounding utterances referencing past interactions to grounded objects is challenging due to the multi-hop nature of references to past interactions and large space of object groundings in a video stream observing the robot's workspace. Our key insight is to factor the temporal reasoning task as (i) estimating the video interval associated with event reference, (ii) performing spatial reasoning over the interaction frames to infer the intended object (iii) semantically track the object's location till the current scene to enable future robot interactions. Our approach leverages existing large pre-trained models (which possess inherent generalization capabilities) and combines them appropriately for temporal grounding tasks. Evaluation on a video-language corpus acquired with a robot manipulator displaying rich temporal interactions in spatially-complex scenes displays an average accuracy of 70.10%. The dataset, code, and videos are available at https://reail-iitdelhi.github.io/temporalreasoning.github.io/ .<|reference_end|> | arxiv | @article{arora2024g$^{2}$tr:,
title={G$^{2}$TR: Generalized Grounded Temporal Reasoning for Robot Instruction
Following by Combining Large Pre-trained Models},
author={Riya Arora and Niveditha Narendranath and Aman Tambi and Sandeep S.
Zachariah and Souvik Chakraborty and Rohan Paul},
journal={arXiv preprint arXiv:2410.07494},
year={2024},
archivePrefix={arXiv},
eprint={2410.07494},
primaryClass={cs.RO}
} | arora2024g$^{2}$tr: |
arxiv-667845 | 2410.07495 | PublicHearingBR: A Brazilian Portuguese Dataset of Public Hearing Transcripts for Summarization of Long Documents | <|reference_start|>PublicHearingBR: A Brazilian Portuguese Dataset of Public Hearing Transcripts for Summarization of Long Documents: This paper introduces PublicHearingBR, a Brazilian Portuguese dataset designed for summarizing long documents. The dataset consists of transcripts of public hearings held by the Brazilian Chamber of Deputies, paired with news articles and structured summaries containing the individuals participating in the hearing and their statements or opinions. The dataset supports the development and evaluation of long document summarization systems in Portuguese. Our contributions include the dataset, a hybrid summarization system to establish a baseline for future studies, and a discussion on evaluation metrics for summarization involving large language models, addressing the challenge of hallucination in the generated summaries. As a result of this discussion, the dataset also provides annotated data that can be used in Natural Language Inference tasks in Portuguese.<|reference_end|> | arxiv | @article{fernandes2024publichearingbr:,
title={PublicHearingBR: A Brazilian Portuguese Dataset of Public Hearing
Transcripts for Summarization of Long Documents},
author={Leandro Car\'isio Fernandes and Guilherme Zeferino Rodrigues Dobins
and Roberto Lotufo and Jayr Alencar Pereira},
journal={arXiv preprint arXiv:2410.07495},
year={2024},
archivePrefix={arXiv},
eprint={2410.07495},
primaryClass={cs.CL}
} | fernandes2024publichearingbr: |
arxiv-667846 | 2410.07497 | Strategic Facility Location via Predictions | <|reference_start|>Strategic Facility Location via Predictions: The facility location with strategic agents is a canonical problem in the literature on mechanism design without money. Recently, Agrawal et. al. considered this problem in the context of machine learning augmented algorithms, where the mechanism designer is also given a prediction of the optimal facility location. An ideal mechanism in this framework produces an outcome that is close to the social optimum when the prediction is accurate (consistency) and gracefully degrades as the prediction deviates from the truth, while retaining some of the worst-case approximation guarantees (robustness). The previous work only addressed this problem in the two-dimensional Euclidean space providing optimal trade-offs between robustness and consistency guarantees for deterministic mechanisms. We consider the problem for \emph{general} metric spaces. Our only assumption is that the metric is continuous, meaning that any pair of points must be connected by a continuous shortest path. We introduce a novel mechanism that in addition to agents' reported locations takes a predicted optimal facility location $\hat{o}$. We call this mechanism $\texttt{Harmonic}$, as it selects one of the reported locations $\tilde{\ell}_i$ with probability inversely proportional to $d(\hat{o},\tilde{\ell}_i)+ \Delta$ for a constant parameter $\Delta$. While \harm \ mechanism is not truthful, we can \emph{characterize the set of undominated strategies} for each agent $i$ as solely consisting of the points on a shortest path from their true location $\ell_i$ to the predicted location $\hat{o}$. We further derive \emph{consistency and robustness guarantees on the Price of Anarchy (PoA)} for the game induced by the mechanism.<|reference_end|> | arxiv | @article{chen2024strategic,
title={Strategic Facility Location via Predictions},
author={Qingyun Chen and Nick Gravin and Sungjin Im},
journal={arXiv preprint arXiv:2410.07497},
year={2024},
archivePrefix={arXiv},
eprint={2410.07497},
primaryClass={cs.GT cs.DS}
} | chen2024strategic |
arxiv-667847 | 2410.07499 | Dense Optimizer : An Information Entropy-Guided Structural Search Method for Dense-like Neural Network Design | <|reference_start|>Dense Optimizer : An Information Entropy-Guided Structural Search Method for Dense-like Neural Network Design: Dense Convolutional Network has been continuously refined to adopt a highly efficient and compact architecture, owing to its lightweight and efficient structure. However, the current Dense-like architectures are mainly designed manually, it becomes increasingly difficult to adjust the channels and reuse level based on past experience. As such, we propose an architecture search method called Dense Optimizer that can search high-performance dense-like network automatically. In Dense Optimizer, we view the dense network as a hierarchical information system, maximize the network's information entropy while constraining the distribution of the entropy across each stage via a power law, thereby constructing an optimization problem. We also propose a branch-and-bound optimization algorithm, tightly integrates power-law principle with search space scaling to solve the optimization problem efficiently. The superiority of Dense Optimizer has been validated on different computer vision benchmark datasets. Specifically, Dense Optimizer completes high-quality search but only costs 4 hours with one CPU. Our searched model DenseNet-OPT achieved a top 1 accuracy of 84.3% on CIFAR-100, which is 5.97% higher than the original one.<|reference_end|> | arxiv | @article{tianyuan2024dense,
title={Dense Optimizer : An Information Entropy-Guided Structural Search Method
for Dense-like Neural Network Design},
author={Liu Tianyuan and Hou Libin and Wang Linyuan and Song Xiyu and Yan
Bin},
journal={arXiv preprint arXiv:2410.07499},
year={2024},
archivePrefix={arXiv},
eprint={2410.07499},
primaryClass={cs.CV cs.AI cs.LG}
} | tianyuan2024dense |
arxiv-667848 | 2410.07500 | Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels | <|reference_start|>Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels: Understanding and modeling pedestrian movements in the real world is crucial for applications like motion forecasting and scene simulation. Many factors influence pedestrian movements, such as scene context, individual characteristics, and goals, which are often ignored by the existing human generation methods. Web videos contain natural pedestrian behavior and rich motion context, but annotating them with pre-trained predictors leads to noisy labels. In this work, we propose learning diverse pedestrian movements from web videos. We first curate a large-scale dataset called CityWalkers that captures diverse real-world pedestrian movements in urban scenes. Then, based on CityWalkers, we propose a generative model called PedGen for diverse pedestrian movement generation. PedGen introduces automatic label filtering to remove the low-quality labels and a mask embedding to train with partial labels. It also contains a novel context encoder that lifts the 2D scene context to 3D and can incorporate various context factors in generating realistic pedestrian movements in urban scenes. Experiments show that PedGen outperforms existing baseline methods for pedestrian movement generation by learning from noisy labels and incorporating the context factors. In addition, PedGen achieves zero-shot generalization in both real-world and simulated environments. The code, model, and data will be made publicly available at https://genforce.github.io/PedGen/ .<|reference_end|> | arxiv | @article{liu2024learning,
title={Learning to Generate Diverse Pedestrian Movements from Web Videos with
Noisy Labels},
author={Zhizheng Liu and Joe Lin and Wayne Wu and Bolei Zhou},
journal={arXiv preprint arXiv:2410.07500},
year={2024},
archivePrefix={arXiv},
eprint={2410.07500},
primaryClass={cs.CV}
} | liu2024learning |
arxiv-667849 | 2410.07501 | Inferring biological processes with intrinsic noise from cross-sectional data | <|reference_start|>Inferring biological processes with intrinsic noise from cross-sectional data: Inferring dynamical models from data continues to be a significant challenge in computational biology, especially given the stochastic nature of many biological processes. We explore a common scenario in omics, where statistically independent cross-sectional samples are available at a few time points, and the goal is to infer the underlying diffusion process that generated the data. Existing inference approaches often simplify or ignore noise intrinsic to the system, compromising accuracy for the sake of optimization ease. We circumvent this compromise by inferring the phase-space probability flow that shares the same time-dependent marginal distributions as the underlying stochastic process. Our approach, probability flow inference (PFI), disentangles force from intrinsic stochasticity while retaining the algorithmic ease of ODE inference. Analytically, we prove that for Ornstein-Uhlenbeck processes the regularized PFI formalism yields a unique solution in the limit of well-sampled distributions. In practical applications, we show that PFI enables accurate parameter and force estimation in high-dimensional stochastic reaction networks, and that it allows inference of cell differentiation dynamics with molecular noise, outperforming state-of-the-art approaches.<|reference_end|> | arxiv | @article{maddu2024inferring,
title={Inferring biological processes with intrinsic noise from cross-sectional
data},
author={Suryanarayana Maddu and Victor Chard\`es and Michael J. Shelley},
journal={arXiv preprint arXiv:2410.07501},
year={2024},
archivePrefix={arXiv},
eprint={2410.07501},
primaryClass={cs.LG physics.bio-ph q-bio.QM}
} | maddu2024inferring |
arxiv-667850 | 2410.07502 | Adaptive Batch Size for Privately Finding Second-Order Stationary Points | <|reference_start|>Adaptive Batch Size for Privately Finding Second-Order Stationary Points: There is a gap between finding a first-order stationary point (FOSP) and a second-order stationary point (SOSP) under differential privacy constraints, and it remains unclear whether privately finding an SOSP is more challenging than finding an FOSP. Specifically, Ganesh et al. (2023) demonstrated that an $\alpha$-SOSP can be found with $\alpha=O(\frac{1}{n^{1/3}}+(\frac{\sqrt{d}}{n\epsilon})^{3/7})$, where $n$ is the dataset size, $d$ is the dimension, and $\epsilon$ is the differential privacy parameter. Building on the SpiderBoost algorithm framework, we propose a new approach that uses adaptive batch sizes and incorporates the binary tree mechanism. Our method improves the results for privately finding an SOSP, achieving $\alpha=O(\frac{1}{n^{1/3}}+(\frac{\sqrt{d}}{n\epsilon})^{1/2})$. This improved bound matches the state-of-the-art for finding an FOSP, suggesting that privately finding an SOSP may be achievable at no additional cost.<|reference_end|> | arxiv | @article{liu2024adaptive,
title={Adaptive Batch Size for Privately Finding Second-Order Stationary Points},
author={Daogao Liu and Kunal Talwar},
journal={arXiv preprint arXiv:2410.07502},
year={2024},
archivePrefix={arXiv},
eprint={2410.07502},
primaryClass={cs.LG cs.CR cs.DS stat.ML}
} | liu2024adaptive |
arxiv-667851 | 2410.07503 | Modeling Alzheimer's Disease: From Memory Loss to Plaque & Tangles Formation | <|reference_start|>Modeling Alzheimer's Disease: From Memory Loss to Plaque & Tangles Formation: We employ the Hopfield model as a simplified framework to explore both the memory deficits and the biochemical processes characteristic of Alzheimer's disease. By simulating neuronal death and synaptic degradation through increasing the number of stored patterns and introducing noise into the synaptic weights, we demonstrate hallmark symptoms of dementia, including memory loss, confusion, and delayed retrieval times. As the network's capacity is exceeded, retrieval errors increase, mirroring the cognitive confusion observed in Alzheimer's patients. Additionally, we simulate the impact of synaptic degradation by varying the sparsity of the weight matrix, showing impaired memory recall and reduced retrieval success as noise levels increase. Furthermore, we extend our model to connect memory loss with biochemical processes linked to Alzheimer's. By simulating the role of reduced insulin sensitivity over time, we show how it can trigger increased calcium influx into mitochondria, leading to misfolded proteins and the formation of amyloid plaques. These findings, modeled over time, suggest that both neuronal degradation and metabolic factors contribute to the progressive decline seen in Alzheimer's disease. Our work offers a computational framework for understanding the dual impact of synaptic and metabolic dysfunction in neurodegenerative diseases.<|reference_end|> | arxiv | @article{nangunoori2024modeling,
title={Modeling Alzheimer's Disease: From Memory Loss to Plaque & Tangles
Formation},
author={Sai Nag Anurag Nangunoori and Akshara Karthic Mahadevan},
journal={arXiv preprint arXiv:2410.07503},
year={2024},
archivePrefix={arXiv},
eprint={2410.07503},
primaryClass={q-bio.NC cs.CV eess.IV}
} | nangunoori2024modeling |
arxiv-667852 | 2410.07504 | Using LLMs to Discover Legal Factors | <|reference_start|>Using LLMs to Discover Legal Factors: Factors are a foundational component of legal analysis and computational models of legal reasoning. These factor-based representations enable lawyers, judges, and AI and Law researchers to reason about legal cases. In this paper, we introduce a methodology that leverages large language models (LLMs) to discover lists of factors that effectively represent a legal domain. Our method takes as input raw court opinions and produces a set of factors and associated definitions. We demonstrate that a semi-automated approach, incorporating minimal human involvement, produces factor representations that can predict case outcomes with moderate success, if not yet as well as expert-defined factors can.<|reference_end|> | arxiv | @article{gray2024using,
title={Using LLMs to Discover Legal Factors},
author={Morgan Gray and Jaromir Savelka and Wesley Oliver and Kevin Ashley},
journal={arXiv preprint arXiv:2410.07504},
year={2024},
archivePrefix={arXiv},
eprint={2410.07504},
primaryClass={cs.CL cs.AI}
} | gray2024using |
arxiv-667853 | 2410.07505 | CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression | <|reference_start|>CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression: Post-Training Quantization (PTQ) is an effective technique for compressing Large Language Models (LLMs). While many studies focus on quantizing both weights and activations, it is still a challenge to maintain the accuracy of LLMs after quantizing activations. To investigate the primary cause, we extend the concept of kernel from linear algebra to quantization functions to define a new term, "quantization kernel", which refers to the set of elements in activations that are quantized to zero. Through quantitative analysis of the quantization kernel, we find that these elements are crucial for maintaining the accuracy of quantized LLMs. As the quantization kernel decreases, the precision of quantized LLMs increases. If the quantization kernel proportion is kept below 19% for OPT models and below 1% for LLaMA models, the precision loss from quantizing activations to INT8 becomes negligible. Motivated by the goal of developing a quantization method with a small quantization kernel, we propose CrossQuant: a simple yet effective method for quantizing activations. CrossQuant cross-quantizes elements using row- and column-wise absolute maximum vectors, achieving a quantization kernel of approximately 16% for OPT models and less than 0.1% for LLaMA models. Experimental results on LLMs (LLaMA, OPT) ranging from 6.7B to 70B parameters demonstrate that CrossQuant improves or maintains perplexity and accuracy in language modeling, zero-shot, and few-shot tasks.<|reference_end|> | arxiv | @article{liu2024crossquant:,
title={CrossQuant: A Post-Training Quantization Method with Smaller
Quantization Kernel for Precise Large Language Model Compression},
author={Wenyuan Liu and Xindian Ma and Peng Zhang and Yan Wang},
journal={arXiv preprint arXiv:2410.07505},
year={2024},
archivePrefix={arXiv},
eprint={2410.07505},
primaryClass={cs.LG cs.AI}
} | liu2024crossquant: |
arxiv-667854 | 2410.07507 | Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs) | <|reference_start|>Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs): Decoding and expressing brain activity in a comprehensible form is a challenging frontier in AI. This paper presents Thought2Text, which uses instruction-tuned Large Language Models (LLMs) fine-tuned with EEG data to achieve this goal. The approach involves three stages: (1) training an EEG encoder for visual feature extraction, (2) fine-tuning LLMs on image and text data, enabling multimodal description generation, and (3) further fine-tuning on EEG embeddings to generate text directly from EEG during inference. Experiments on a public EEG dataset collected for six subjects with image stimuli demonstrate the efficacy of multimodal LLMs (LLaMa-v3, Mistral-v0.3, Qwen2.5), validated using traditional language generation evaluation metrics, GPT-4-based assessments, and evaluations by human experts. This approach marks a significant advancement towards portable, low-cost "thoughts-to-text" technology with potential applications in both neuroscience and natural language processing (NLP).<|reference_end|> | arxiv | @article{mishra2024thought2text:,
title={Thought2Text: Text Generation from EEG Signal using Large Language
Models (LLMs)},
author={Abhijit Mishra and Shreya Shukla and Jose Torres and Jacek Gwizdka and
Shounak Roychowdhury},
journal={arXiv preprint arXiv:2410.07507},
year={2024},
archivePrefix={arXiv},
eprint={2410.07507},
primaryClass={cs.CL}
} | mishra2024thought2text: |
arxiv-667855 | 2410.07508 | MOLA: Enhancing Industrial Process Monitoring Using Multi-Block Orthogonal Long Short-Term Memory Autoencoder | <|reference_start|>MOLA: Enhancing Industrial Process Monitoring Using Multi-Block Orthogonal Long Short-Term Memory Autoencoder: In this work, we introduce MOLA: a Multi-block Orthogonal Long short-term memory Autoencoder paradigm, to conduct accurate, reliable fault detection of industrial processes. To achieve this, MOLA effectively extracts dynamic orthogonal features by introducing an orthogonality-based loss function to constrain the latent space output. This helps eliminate the redundancy in the features identified, thereby improving the overall monitoring performance. On top of this, a multi-block monitoring structure is proposed, which categorizes the process variables into multiple blocks by leveraging expert process knowledge about their associations with the overall process. Each block is associated with its specific Orthogonal Long short-term memory Autoencoder model, whose extracted dynamic orthogonal features are monitored by distance-based Hotelling's $T^2$ statistics and quantile-based cumulative sum (CUSUM) designed for multivariate data streams that are nonparametric, heterogeneous in nature. Compared to having a single model accounting for all process variables, such a multi-block structure improves the overall process monitoring performance significantly, especially for large-scale industrial processes. Finally, we propose an adaptive weight-based Bayesian fusion (W-BF) framework to aggregate all block-wise monitoring statistics into a global statistic that we monitor for faults, with the goal of improving fault detection speed by assigning weights to blocks based on the sequential order where alarms are raised. 
We demonstrate the efficiency and effectiveness of our MOLA framework by applying it to the Tennessee Eastman Process and comparing the performance with various benchmark methods.<|reference_end|> | arxiv | @article{ma2024mola:,
title={MOLA: Enhancing Industrial Process Monitoring Using Multi-Block
Orthogonal Long Short-Term Memory Autoencoder},
author={Fangyuan Ma and Cheng Ji and Jingde Wang and Wei Sun and Xun Tang and
Zheyu Jiang},
journal={arXiv preprint arXiv:2410.07508},
year={2024},
archivePrefix={arXiv},
eprint={2410.07508},
primaryClass={cs.LG}
} | ma2024mola: |
arxiv-667856 | 2410.07511 | CSGDN: Contrastive Signed Graph Diffusion Network for Predicting Crop Gene-Trait Associations | <|reference_start|>CSGDN: Contrastive Signed Graph Diffusion Network for Predicting Crop Gene-Trait Associations: Prediction of positive and negative associations between genes and traits supports studies of how crops perform complex physiological functions. The transcription and regulation activity of specific genes will be adjusted accordingly in different cell types, developmental stages, and physiological states to meet the needs of organisms. Determining gene-trait associations can resolve the mechanism of trait formation and benefit the improvement of crop yield and quality. There are two problems in obtaining positive/negative associations between gene and trait: 1) High-throughput DNA/RNA sequencing and trait data collection are expensive and time-consuming due to the need to process large sample sizes; 2) experiments introduce both random and systematic errors, and, at the same time, calculations or predictions using software or models may produce noise. To address these two issues, we propose a Contrastive Signed Graph Diffusion Network, CSGDN, to learn robust node representations with fewer training samples to achieve higher link prediction accuracy. CSGDN employs a signed graph diffusion method to uncover the underlying regulatory associations between genes and traits. Then, stochastic perturbation strategies are used to create two views for both the original and diffusive graphs. Finally, a multi-view contrastive learning loss is designed to unify the node representations learned from the two views to resist interference and reduce noise. We conduct experiments to validate the performance of CSGDN on three crop datasets: Gossypium hirsutum, Brassica napus, and Triticum turgidum. The results demonstrate that the proposed model outperforms state-of-the-art methods by up to 9.28% AUC for link sign prediction on the G. hirsutum dataset.<|reference_end|> | arxiv | @article{pan2024csgdn:,
title={CSGDN: Contrastive Signed Graph Diffusion Network for Predicting Crop
Gene-Trait Associations},
author={Yiru Pan and Xingyu Ji and Jiaqi You and Lu Li and Zhenping Liu and
Xianlong Zhang and Zeyu Zhang and Maojun Wang},
journal={arXiv preprint arXiv:2410.07511},
year={2024},
archivePrefix={arXiv},
eprint={2410.07511},
primaryClass={cs.LG}
} | pan2024csgdn: |
arxiv-667857 | 2410.07513 | Evolutionary Contrastive Distillation for Language Model Alignment | <|reference_start|>Evolutionary Contrastive Distillation for Language Model Alignment: The ability of large language models (LLMs) to execute complex instructions is essential for their real-world applications. However, several recent studies indicate that LLMs struggle with challenging instructions. In this paper, we propose Evolutionary Contrastive Distillation (ECD), a novel method for generating high-quality synthetic preference data designed to enhance the complex instruction-following capability of language models. ECD generates data that specifically illustrates the difference between a response that successfully follows a set of complex instructions and a response that is high-quality, but nevertheless makes some subtle mistakes. This is done by prompting LLMs to progressively evolve simple instructions to more complex instructions. When the complexity of an instruction is increased, the original successful response to the original instruction becomes a "hard negative" response for the new instruction, mostly meeting requirements of the new instruction, but barely missing one or two. By pairing a good response with such a hard negative response, and employing contrastive learning algorithms such as DPO, we improve language models' ability to follow complex instructions. Empirically, we observe that our method yields a 7B model that exceeds the complex instruction-following performance of current SOTA 7B models and is competitive even with open-source 70B models.<|reference_end|> | arxiv | @article{katz-samuels2024evolutionary,
title={Evolutionary Contrastive Distillation for Language Model Alignment},
author={Julian Katz-Samuels and Zheng Li and Hyokun Yun and Priyanka Nigam and
Yi Xu and Vaclav Petricek and Bing Yin and Trishul Chilimbi},
journal={arXiv preprint arXiv:2410.07513},
year={2024},
archivePrefix={arXiv},
eprint={2410.07513},
primaryClass={cs.LG cs.AI cs.CL}
} | katz-samuels2024evolutionary |
arxiv-667858 | 2410.07514 | O1O: Grouping of Known Classes to Identify Unknown Objects as Odd-One-Out | <|reference_start|>O1O: Grouping of Known Classes to Identify Unknown Objects as Odd-One-Out: Object detection methods trained on a fixed set of known classes struggle to detect objects of unknown classes in the open-world setting. Current fixes involve adding approximate supervision with pseudo-labels corresponding to candidate locations of objects, typically obtained in a class-agnostic manner. While previous approaches mainly rely on the appearance of objects, we find that geometric cues improve unknown recall. Although additional supervision from pseudo-labels helps to detect unknown objects, it also introduces confusion for known classes. We observed a notable decline in the model's performance for detecting known objects in the presence of noisy pseudo-labels. Drawing inspiration from studies on human cognition, we propose to group known classes into superclasses. By identifying similarities between classes within a superclass, we can identify unknown classes through an odd-one-out scoring mechanism. Our experiments on open-world detection benchmarks demonstrate significant improvements in unknown recall, consistently across all tasks. Crucially, we achieve this without compromising known performance, thanks to better partitioning of the feature space with superclasses.<|reference_end|> | arxiv | @article{yavuz2024o1o:,
title={O1O: Grouping of Known Classes to Identify Unknown Objects as
Odd-One-Out},
author={M{\i}sra Yavuz and Fatma G{\"u}ney},
journal={arXiv preprint arXiv:2410.07514},
year={2024},
archivePrefix={arXiv},
eprint={2410.07514},
primaryClass={cs.CV}
} | yavuz2024o1o: |
arxiv-667859 | 2410.07516 | Exploring and Lifting the Robustness of LLM-powered Automated Program Repair with Metamorphic Testing | <|reference_start|>Exploring and Lifting the Robustness of LLM-powered Automated Program Repair with Metamorphic Testing: In recent years, Large language model-powered Automated Program Repair (LAPR) techniques have achieved state-of-the-art bug-fixing performance and have been pervasively applied and studied in both industry and academia. Nonetheless, LLMs have proved to be highly sensitive to input prompts, with slight differences in the expressions of semantically equivalent programs potentially causing repair failures. Therefore, it is crucial to conduct robustness testing on LAPR techniques before their practical deployment. However, related research is scarce. To this end, we propose MT-LAPR, a Metamorphic Testing framework exclusively for LAPR techniques, which summarizes nine widely recognized Metamorphic Relations (MRs) by developers across three perturbation levels: token, statement, and block. Afterward, our proposed MRs are applied to buggy code to generate test cases, which are semantically equivalent yet can affect the inference of LAPR. Experiments are carried out on two extensively examined bug-fixing datasets, i.e., Defects4J and QuixBugs, and four recently released bug-fixing-capable LLMs, demonstrating that 34.4%-48.5% of the test cases expose the instability of LAPR techniques on average, showing the effectiveness of MT-LAPR and uncovering a positive correlation between code readability and the robustness of LAPR techniques. Inspired by the above findings, this paper uses the test cases generated by MT-LAPR as samples to train a CodeT5-based code editing model aimed at improving code readability and then embeds it into the LAPR workflow as a data preprocessing step. Extensive experiments demonstrate that this approach significantly enhances the robustness of LAPR by up to 49.32%.<|reference_end|> | arxiv | @article{xue2024exploring,
title={Exploring and Lifting the Robustness of LLM-powered Automated Program
Repair with Metamorphic Testing},
author={Pengyu Xue and Linhao Wu and Zhen Yang and Xinyi Li and Zhongxing Yu
and Zhi Jin and Ge Li and Yan Xiao and Jingwen Wu},
journal={arXiv preprint arXiv:2410.07516},
year={2024},
archivePrefix={arXiv},
eprint={2410.07516},
primaryClass={cs.SE}
} | xue2024exploring |
arxiv-667860 | 2410.07518 | Exploring the Landscape of Distributed Graph Sketching | <|reference_start|>Exploring the Landscape of Distributed Graph Sketching: Recent work has initiated the study of dense graph processing using graph sketching methods, which drastically reduce space costs by lossily compressing information about the input graph. In this paper, we explore the strange and surprising performance landscape of sketching algorithms. We highlight both their surprising advantages for processing dense graphs that were previously prohibitively expensive to study, as well as the current limitations of the technique. Most notably, we show how sketching can avoid bottlenecks that limit conventional graph processing methods. Single-machine streaming graph processing systems are typically bottlenecked by CPU performance, and distributed graph processing systems are typically bottlenecked by network latency. We present Landscape, a distributed graph-stream processing system that uses linear sketching to distribute the CPU work of computing graph properties to distributed workers with no need for worker-to-worker communication. As a result, it overcomes the CPU and network bottlenecks that limit other systems. In fact, for the connected components problem, Landscape achieves a stream ingestion rate one-fourth that of maximum sustained RAM bandwidth, and is four times faster than random access RAM bandwidth. Additionally, we prove that for any sequence of graph updates and queries Landscape consumes at most a constant factor more network bandwidth than is required to receive the input stream. We show that this system can ingest up to 332 million stream updates per second on a graph with $2^{17}$ vertices. We show that it scales well with more distributed compute power: given a cluster of 40 distributed worker machines, it can ingest updates 35 times as fast as with 1 distributed worker machine. 
Landscape uses heuristics to reduce its query latency by up to four orders of magnitude over the prior state of the art.<|reference_end|> | arxiv | @article{tench2024exploring,
title={Exploring the Landscape of Distributed Graph Sketching},
author={David Tench and Evan T. West and Kenny Zhang and Michael Bender and
Daniel DeLayo and Martin Farach-Colton and Gilvir Gill and Tyler Seip and
Victor Zhang},
journal={arXiv preprint arXiv:2410.07518},
year={2024},
archivePrefix={arXiv},
eprint={2410.07518},
primaryClass={cs.DC}
} | tench2024exploring |
arxiv-667861 | 2410.07519 | MEMS Gyroscope Multi-Feature Calibration Using Machine Learning Technique | <|reference_start|>MEMS Gyroscope Multi-Feature Calibration Using Machine Learning Technique: Gyroscopes are crucial for accurate angular velocity measurements in navigation, stabilization, and control systems. MEMS gyroscopes offer advantages like compact size and low cost but suffer from errors and inaccuracies that are complex and time varying. This study leverages machine learning (ML) and uses multiple signals of the MEMS resonator gyroscope to improve its calibration. XGBoost, known for its high predictive accuracy and ability to handle complex, non-linear relationships, and MLP, recognized for its capability to model intricate patterns through multiple layers and hidden dimensions, are employed to enhance the calibration process. Our findings show that both XGBoost and MLP models significantly reduce noise and enhance accuracy and stability, outperforming the traditional calibration techniques. Despite higher computational costs, DL models are ideal for high-stakes applications, while ML models are efficient for consumer electronics and environmental monitoring. Both ML and DL models demonstrate the potential of advanced calibration techniques in enhancing MEMS gyroscope performance and calibration efficiency.<|reference_end|> | arxiv | @article{long2024mems,
title={MEMS Gyroscope Multi-Feature Calibration Using Machine Learning
Technique},
author={Yaoyao Long and Zhenming Liu and Cong Hao and Farrokh Ayazi},
journal={arXiv preprint arXiv:2410.07519},
year={2024},
archivePrefix={arXiv},
eprint={2410.07519},
primaryClass={cs.LG eess.SP}
} | long2024mems |
arxiv-667862 | 2410.07520 | News Reporter: A Multi-lingual LLM Framework for Broadcast TV News | <|reference_start|>News Reporter: A Multi-lingual LLM Framework for Broadcast TV News: Large Language Models (LLMs) have fast become essential tools for many conversational chatbots due to their ability to provide coherent answers to varied queries. Datasets used to train these LLMs are often a mix of generic and synthetic samples, thus lacking the verification needed to provide correct and verifiable answers for TV news. We collect and share a large collection of QA pairs extracted from transcripts of news recordings from various news channels across the United States. The resulting QA pairs are then used to fine-tune an off-the-shelf LLM. Our model surpasses base models of similar size on several open LLM benchmarks. We further propose and integrate a RAG method to improve the contextualization of our answers and also point them to a verifiable news recording.<|reference_end|> | arxiv | @article{jain2024news,
title={News Reporter: A Multi-lingual LLM Framework for Broadcast T.V News},
author={Tarun Jain and Yufei Gao and Sridhar Vanga and Karan Singla},
journal={arXiv preprint arXiv:2410.07520},
year={2024},
archivePrefix={arXiv},
eprint={2410.07520},
primaryClass={cs.CL}
} | jain2024news |
arxiv-667863 | 2410.07523 | DemoShapley: Valuation of Demonstrations for In-Context Learning | <|reference_start|>DemoShapley: Valuation of Demonstrations for In-Context Learning: Large language models (LLMs) leveraging in-context learning (ICL) have set new benchmarks in few-shot learning across various tasks without needing task-specific fine-tuning. However, extensive research has demonstrated that the effectiveness of ICL is significantly influenced by the selection and ordering of demonstrations. Considering the critical role of demonstration selection in ICL, we introduce DemoShapley which is inspired by the Data Shapley valuation theorem. This approach assesses the influence of individual demonstration instances, distinguishing between those that contribute positively and those that may hinder performance. Our findings reveal that DemoShapley not only enhances model performance in terms of accuracy and fairness but also generalizes queries from domains distinct from those of the in-context demonstrations, highlighting its versatility and effectiveness in optimizing ICL demonstration selection. Last but not least, DemoShapley demonstrates its ability to aid in identifying noisy data within the demonstration set.<|reference_end|> | arxiv | @article{xie2024demoshapley:,
title={DemoShapley: Valuation of Demonstrations for In-Context Learning},
author={Shan Xie and Man Luo and Chadly Daniel Stern and Mengnan Du and Lu
Cheng},
journal={arXiv preprint arXiv:2410.07523},
year={2024},
archivePrefix={arXiv},
eprint={2410.07523},
primaryClass={cs.CL cs.AI cs.LG}
} | xie2024demoshapley: |
arxiv-667864 | 2410.07524 | Upcycling Large Language Models into Mixture of Experts | <|reference_start|>Upcycling Large Language Models into Mixture of Experts: Upcycling pre-trained dense language models into sparse mixture-of-experts (MoE) models is an efficient approach to increase the model capacity of already trained models. However, optimal techniques for upcycling at scale remain unclear. In this work, we conduct an extensive study of upcycling methods and hyperparameters for billion-parameter scale language models. We propose a novel "virtual group" initialization scheme and weight scaling approach to enable upcycling into fine-grained MoE architectures. Through ablations, we find that upcycling outperforms continued dense model training. In addition, we show that softmax-then-topK expert routing improves over topK-then-softmax approach and higher granularity MoEs can help improve accuracy. Finally, we upcycled Nemotron-4 15B on 1T tokens and compared it to a continuously trained version of the same model on the same 1T tokens: the continuous trained model achieved 65.3% MMLU, whereas the upcycled model achieved 67.6%. Our results offer insights and best practices to effectively leverage upcycling for building MoE language models.<|reference_end|> | arxiv | @article{he2024upcycling,
title={Upcycling Large Language Models into Mixture of Experts},
author={Ethan He and Abhinav Khattar and Ryan Prenger and Vijay Korthikanti
and Zijie Yan and Tong Liu and Shiqing Fan and Ashwath Aithal and Mohammad
Shoeybi and Bryan Catanzaro},
journal={arXiv preprint arXiv:2410.07524},
year={2024},
archivePrefix={arXiv},
eprint={2410.07524},
primaryClass={cs.CL cs.AI cs.LG}
} | he2024upcycling |
arxiv-667865 | 2410.07525 | Offline Inverse Constrained Reinforcement Learning for Safe-Critical Decision Making in Healthcare | <|reference_start|>Offline Inverse Constrained Reinforcement Learning for Safe-Critical Decision Making in Healthcare: Reinforcement Learning (RL) applied in healthcare can lead to unsafe medical decisions and treatment, such as excessive dosages or abrupt changes, often due to agents overlooking common-sense constraints. Consequently, Constrained Reinforcement Learning (CRL) is a natural choice for safe decisions. However, specifying the exact cost function is inherently difficult in healthcare. Recent Inverse Constrained Reinforcement Learning (ICRL) is a promising approach that infers constraints from expert demonstrations. ICRL algorithms model Markovian decisions in an interactive environment. These settings do not align with the practical requirement of a decision-making system in healthcare, where decisions rely on historical treatment recorded in an offline dataset. To tackle these issues, we propose the Constraint Transformer (CT). Specifically, 1) we utilize a causal attention mechanism to incorporate historical decisions and observations into the constraint modeling, while employing a Non-Markovian layer for weighted constraints to capture critical states. 2) A generative world model is used to perform exploratory data augmentation, enabling offline RL methods to simulate unsafe decision sequences. In multiple medical scenarios, empirical results demonstrate that CT can capture unsafe states and achieve strategies that approximate lower mortality rates, reducing the occurrence probability of unsafe behaviors.<|reference_end|> | arxiv | @article{fang2024offline,
title={Offline Inverse Constrained Reinforcement Learning for Safe-Critical
Decision Making in Healthcare},
author={Nan Fang and Guiliang Liu and Wei Gong},
journal={arXiv preprint arXiv:2410.07525},
year={2024},
archivePrefix={arXiv},
eprint={2410.07525},
primaryClass={cs.LG cs.AI}
} | fang2024offline |
arxiv-667866 | 2410.07526 | MKGL: Mastery of a Three-Word Language | <|reference_start|>MKGL: Mastery of a Three-Word Language: Large language models (LLMs) have significantly advanced performance across a spectrum of natural language processing (NLP) tasks. Yet, their application to knowledge graphs (KGs), which describe facts in the form of triplets and allow minimal hallucinations, remains an underexplored frontier. In this paper, we investigate the integration of LLMs with KGs by introducing a specialized KG Language (KGL), where a sentence precisely consists of an entity noun, a relation verb, and ends with another entity noun. Despite KGL's unfamiliar vocabulary to the LLM, we facilitate its learning through a tailored dictionary and illustrative sentences, and enhance context understanding via real-time KG context retrieval and KGL token embedding augmentation. Our results reveal that LLMs can achieve fluency in KGL, drastically reducing errors compared to conventional KG embedding methods on KG completion. Furthermore, our enhanced LLM shows exceptional competence in generating accurate three-word sentences from an initial entity and interpreting new unseen terms out of KGs.<|reference_end|> | arxiv | @article{guo2024mkgl:,
title={MKGL: Mastery of a Three-Word Language},
author={Lingbing Guo and Zhongpu Bo and Zhuo Chen and Yichi Zhang and Jiaoyan
Chen and Yarong Lan and Mengshu Sun and Zhiqiang Zhang and Yangyifei Luo and
Qian Li and Qiang Zhang and Wen Zhang and Huajun Chen},
journal={arXiv preprint arXiv:2410.07526},
year={2024},
archivePrefix={arXiv},
eprint={2410.07526},
primaryClass={cs.CL cs.AI}
} | guo2024mkgl: |
arxiv-667867 | 2410.07527 | Enhanced physics-informed neural networks (PINNs) for high-order power grid dynamics | <|reference_start|>Enhanced physics-informed neural networks (PINNs) for high-order power grid dynamics: We develop improved physics-informed neural networks (PINNs) for high-order and high-dimensional power system models described by nonlinear ordinary differential equations. We propose some novel enhancements to improve PINN training and accuracy and also implement several other recently proposed ideas from the literature. We successfully apply these to study the transient dynamics of synchronous generators. We also make progress towards applying PINNs to advanced inverter models. Such enhanced PINNs can allow us to accelerate high-fidelity simulations needed to ensure a stable and reliable renewables-rich future grid.<|reference_end|> | arxiv | @article{nair2024enhanced,
title={Enhanced physics-informed neural networks (PINNs) for high-order power
grid dynamics},
author={Vineet Jagadeesan Nair},
journal={arXiv preprint arXiv:2410.07527},
year={2024},
archivePrefix={arXiv},
eprint={2410.07527},
primaryClass={cs.LG cs.SY eess.SY}
} | nair2024enhanced |
arxiv-667868 | 2410.07528 | CountMamba: Exploring Multi-directional Selective State-Space Models for Plant Counting | <|reference_start|>CountMamba: Exploring Multi-directional Selective State-Space Models for Plant Counting: Plant counting is essential in every stage of agriculture, including seed breeding, germination, cultivation, fertilization, pollination yield estimation, and harvesting. Inspired by the fact that humans count objects in high-resolution images by sequential scanning, we explore the potential of handling plant counting tasks via state space models (SSMs) for generating counting results. In this paper, we propose a new counting approach named CountMamba that constructs multiple counting experts to scan from various directions simultaneously. Specifically, we design a Multi-directional State-Space Group to process the image patch sequences in multiple orders and aim to simulate different counting experts. We also design Global-Local Adaptive Fusion to adaptively aggregate global features extracted from multiple directions and local features extracted from the CNN branch in a sample-wise manner. Extensive experiments demonstrate that the proposed CountMamba performs competitively on various plant counting tasks, including maize tassels, wheat ears, and sorghum head counting.<|reference_end|> | arxiv | @article{he2024countmamba:,
title={CountMamba: Exploring Multi-directional Selective State-Space Models for
Plant Counting},
author={Hulingxiao He and Yaqi Zhang and Jinglin Xu and Yuxin Peng},
journal={arXiv preprint arXiv:2410.07528},
year={2024},
archivePrefix={arXiv},
eprint={2410.07528},
primaryClass={cs.CV}
} | he2024countmamba: |
arxiv-667869 | 2410.07530 | Audio Explanation Synthesis with Generative Foundation Models | <|reference_start|>Audio Explanation Synthesis with Generative Foundation Models: The increasing success of audio foundation models across various tasks has led to a growing need for improved interpretability to understand their intricate decision-making processes better. Existing methods primarily focus on explaining these models by attributing importance to elements within the input space based on their influence on the final decision. In this paper, we introduce a novel audio explanation method that capitalises on the generative capacity of audio foundation models. Our method leverages the intrinsic representational power of the embedding space within these models by integrating established feature attribution techniques to identify significant features in this space. The method then generates listenable audio explanations by prioritising the most important features. Through rigorous benchmarking against standard datasets, including keyword spotting and speech emotion recognition, our model demonstrates its efficacy in producing audio explanations.<|reference_end|> | arxiv | @article{akman2024audio,
title={Audio Explanation Synthesis with Generative Foundation Models},
author={Alican Akman and Qiyang Sun and Bj{\"o}rn W. Schuller},
journal={arXiv preprint arXiv:2410.07530},
year={2024},
archivePrefix={arXiv},
eprint={2410.07530},
primaryClass={cs.SD cs.AI eess.AS}
} | akman2024audio |
arxiv-667870 | 2410.07531 | Reducing the Cost of Dropout in Flash-Attention by Hiding RNG with GEMM | <|reference_start|>Reducing the Cost of Dropout in Flash-Attention by Hiding RNG with GEMM: Dropout, a network operator, when enabled is likely to dramatically impact the performance of Flash-Attention, which in turn increases the end-to-end training time of Large-Language-Models (LLMs). The main contributor to such performance degradation is the Random Number Generation (RNG) phase that is traditionally fused into the Flash-Attention kernel. As RNG and Attention have the same hardware bottlenecks, RNG latency can hardly be hidden within the Attention kernel. We propose overlapping RNG with previous GEMM layers in the network to hide RNG runtime and improve end-to-end performance. RNG and GEMM have distinct resource requirements and hardware bottlenecks, so they can run in parallel without compromising each other's performance. Our fine-grained performance model, cross-validated by silicon results, shows 1.14x speedup on one transformer block (including multi-head attention and feed-forward layers) for Llama2, and up to 1.23x speedup when varying workload sizes, on GH100 GPUs with FP8 precision. Further, we extend our theoretical model to different RNG implementations and hardware architectures, and discuss the widely applicable benefits for overlapping RNG with GEMM layers.<|reference_end|> | arxiv | @article{ma2024reducing,
title={Reducing the Cost of Dropout in Flash-Attention by Hiding RNG with GEMM},
author={Haiyue Ma, Jian Liu, Ronny Krashinsky},
journal={arXiv preprint arXiv:2410.07531},
year={2024},
archivePrefix={arXiv},
eprint={2410.07531},
primaryClass={cs.AR cs.AI}
} | ma2024reducing |
arxiv-667871 | 2410.07533 | Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification | <|reference_start|>Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification: In linear bandits, how can a learner effectively learn when facing corrupted rewards? While significant work has explored this question, a holistic understanding across different adversarial models and corruption measures is lacking, as is a full characterization of the minimax regret bounds. In this work, we compare two types of corruptions commonly considered: strong corruption, where the corruption level depends on the action chosen by the learner, and weak corruption, where the corruption level does not depend on the action chosen by the learner. We provide a unified framework to analyze these corruptions. For stochastic linear bandits, we fully characterize the gap between the minimax regret under strong and weak corruptions. We also initiate the study of corrupted adversarial linear bandits, obtaining upper and lower bounds with matching dependencies on the corruption level. Next, we reveal a connection between corruption-robust learning and learning with gap-dependent mis-specification, a setting first studied by Liu et al. (2023a), where the misspecification level of an action or policy is proportional to its suboptimality. We present a general reduction that enables any corruption-robust algorithm to handle gap-dependent misspecification. This allows us to recover the results of Liu et al. (2023a) in a black-box manner and significantly generalize them to settings like linear MDPs, yielding the first results for gap-dependent misspecification in reinforcement learning. However, this general reduction does not attain the optimal rate for gap-dependent misspecification. 
Motivated by this, we develop a specialized algorithm that achieves optimal bounds for gap-dependent misspecification in linear bandits, thus answering an open question posed by Liu et al. (2023a).<|reference_end|> | arxiv | @article{liu2024corruption-robust,
title={Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent
Misspecification},
author={Haolin Liu, Artin Tajdini, Andrew Wagenmaker, Chen-Yu Wei},
journal={arXiv preprint arXiv:2410.07533},
year={2024},
archivePrefix={arXiv},
eprint={2410.07533},
primaryClass={cs.LG stat.ML}
} | liu2024corruption-robust |
arxiv-667872 | 2410.07535 | Constraint representation towards precise data-driven storytelling | <|reference_start|>Constraint representation towards precise data-driven storytelling: Data-driven storytelling serves as a crucial bridge for communicating ideas in a persuasive way. However, the manual creation of data stories is a multifaceted, labor-intensive, and case-specific effort, limiting their broader application. As a result, automating the creation of data stories has emerged as a significant research thrust. Despite advances in Artificial Intelligence, the systematic generation of data stories remains challenging due to their hybrid nature: they must frame a perspective based on a seed idea in a top-down manner, similar to traditional storytelling, while coherently grounding insights of given evidence in a bottom-up fashion, akin to data analysis. These dual requirements necessitate precise constraints on the permissible space of a data story. In this viewpoint, we propose integrating constraints into the data story generation process. Defined upon the hierarchies of interpretation and articulation, constraints shape both narrations and illustrations to align with seed ideas and contextualized evidence. We identify the taxonomy and required functionalities of these constraints. Although constraints can be heterogeneous and latent, we explore the potential to represent them in a computation-friendly fashion via Domain-Specific Languages. We believe that leveraging constraints will facilitate both artistic and scientific aspects of data story generation.<|reference_end|> | arxiv | @article{shi2024constraint,
title={Constraint representation towards precise data-driven storytelling},
author={Yu-Zhe Shi, Haotian Li, Lecheng Ruan, Huamin Qu},
journal={arXiv preprint arXiv:2410.07535},
year={2024},
archivePrefix={arXiv},
eprint={2410.07535},
primaryClass={cs.HC}
} | shi2024constraint |
arxiv-667873 | 2410.07536 | I-Max: Maximize the Resolution Potential of Pre-trained Rectified Flow Transformers with Projected Flow | <|reference_start|>I-Max: Maximize the Resolution Potential of Pre-trained Rectified Flow Transformers with Projected Flow: Rectified Flow Transformers (RFTs) offer superior training and inference efficiency, making them likely the most viable direction for scaling up diffusion models. However, progress in generation resolution has been relatively slow due to data quality and training costs. Tuning-free resolution extrapolation presents an alternative, but current methods often reduce generative stability, limiting practical application. In this paper, we review existing resolution extrapolation methods and introduce the I-Max framework to maximize the resolution potential of Text-to-Image RFTs. I-Max features: (i) a novel Projected Flow strategy for stable extrapolation and (ii) an advanced inference toolkit for generalizing model knowledge to higher resolutions. Experiments with Lumina-Next-2K and Flux.1-dev demonstrate I-Max's ability to enhance stability in resolution extrapolation and show that it can bring image detail emergence and artifact correction, confirming the practical value of tuning-free resolution extrapolation.<|reference_end|> | arxiv | @article{du2024i-max:,
title={I-Max: Maximize the Resolution Potential of Pre-trained Rectified Flow
Transformers with Projected Flow},
author={Ruoyi Du, Dongyang Liu, Le Zhuo, Qin Qi, Hongsheng Li, Zhanyu Ma, Peng
Gao},
journal={arXiv preprint arXiv:2410.07536},
year={2024},
archivePrefix={arXiv},
eprint={2410.07536},
primaryClass={cs.CV}
} | du2024i-max: |
arxiv-667874 | 2410.07537 | Understanding the AI-powered Binary Code Similarity Detection | <|reference_start|>Understanding the AI-powered Binary Code Similarity Detection: AI-powered binary code similarity detection (BinSD), which transforms intricate binary code comparison to the distance measure of code embedding through neural networks, has been widely applied to program analysis. However, due to the diversity of the adopted embedding strategies, evaluation methodologies, running environments, and/or benchmarks, it is difficult to quantitatively understand to what extent the BinSD problem has been solved, especially in real-world applications. Moreover, the lack of an in-depth investigation of the increasingly complex embedding neural networks and various evaluation methodologies has become the key factor hindering the development of AI-powered BinSD. To fill these research gaps, in this paper, we present a systematic evaluation of state-of-the-art AI-powered BinSD approaches by conducting a comprehensive comparison of BinSD systems on similar function detection and two downstream applications, namely vulnerability search and license violation detection. Building upon this evaluation, we perform the first investigation of embedding neural networks and evaluation methodologies. The experimental results yield several findings, which provide valuable insights in the BinSD domain, including (1) despite the GNN-based BinSD systems currently achieving the best performance in similar function detection, there still exists considerable space for improvements; (2) the capability of AI-powered BinSD approaches exhibits significant variation when applied to different downstream applications; (3) existing evaluation methodologies still need substantial adjustments. For instance, the evaluation metrics (such as the widely adopted ROC and AUC) usually fall short of accurately representing the model performance of the practical use in real-world scenarios.
Based on the extensive experiments and analysis, we further provide several promising future research directions.<|reference_end|> | arxiv | @article{fu2024understanding,
title={Understanding the AI-powered Binary Code Similarity Detection},
author={Lirong Fu, Peiyu Liu, Wenlong Meng, Kangjie Lu, Shize Zhou, Xuhong
Zhang, Wenzhi Chen, Shouling Ji},
journal={arXiv preprint arXiv:2410.07537},
year={2024},
archivePrefix={arXiv},
eprint={2410.07537},
primaryClass={cs.SE}
} | fu2024understanding |
arxiv-667875 | 2410.07538 | Rank Aggregation in Crowdsourcing for Listwise Annotations | <|reference_start|>Rank Aggregation in Crowdsourcing for Listwise Annotations: Rank aggregation through crowdsourcing has recently gained significant attention, particularly in the context of listwise ranking annotations. However, existing methods primarily focus on a single problem and partial ranks, while the aggregation of listwise full ranks across numerous problems remains largely unexplored. This scenario finds relevance in various applications, such as model quality assessment and reinforcement learning with human feedback. In light of practical needs, we propose LAC, a Listwise rank Aggregation method in Crowdsourcing, where the global position information is carefully measured and included. In our design, an especially proposed annotation quality indicator is employed to measure the discrepancy between the annotated rank and the true rank. We also take the difficulty of the ranking problem itself into consideration, as it directly impacts the performance of annotators and consequently influences the final results. To our knowledge, LAC is the first work to directly deal with the full rank aggregation problem in listwise crowdsourcing, and simultaneously infer the difficulty of problems, the ability of annotators, and the ground-truth ranks in an unsupervised way. To evaluate our method, we collect a real-world business-oriented dataset for paragraph ranking. Experimental results on both synthetic and real-world benchmark datasets demonstrate the effectiveness of our proposed LAC method.<|reference_end|> | arxiv | @article{luo2024rank,
title={Rank Aggregation in Crowdsourcing for Listwise Annotations},
author={Wenshui Luo, Haoyu Liu, Yongliang Ding, Tao Zhou, Sheng Wan, Runze Wu,
Minmin Lin, Cong Zhang, Changjie Fan, Chen Gong},
journal={arXiv preprint arXiv:2410.07538},
year={2024},
archivePrefix={arXiv},
eprint={2410.07538},
primaryClass={cs.LG}
} | luo2024rank |
arxiv-667876 | 2410.07539 | Efficient Generation of Molecular Clusters with Dual-Scale Equivariant Flow Matching | <|reference_start|>Efficient Generation of Molecular Clusters with Dual-Scale Equivariant Flow Matching: Amorphous molecular solids offer a promising alternative to inorganic semiconductors, owing to their mechanical flexibility and solution processability. The packing structure of these materials plays a crucial role in determining their electronic and transport properties, which are key to enhancing the efficiency of devices like organic solar cells (OSCs). However, obtaining these optoelectronic properties computationally requires molecular dynamics (MD) simulations to generate a conformational ensemble, a process that can be computationally expensive due to the large system sizes involved. Recent advances have focused on using generative models, particularly flow-based models as Boltzmann generators, to improve the efficiency of MD sampling. In this work, we developed a dual-scale flow matching method that separates training and inference into coarse-grained and all-atom stages and enhances both the accuracy and efficiency of standard flow matching samplers. We demonstrate the effectiveness of this method on a dataset of Y6 molecular clusters obtained through MD simulations, and we benchmark its efficiency and accuracy against single-scale flow matching methods.<|reference_end|> | arxiv | @article{subramanian2024efficient,
title={Efficient Generation of Molecular Clusters with Dual-Scale Equivariant
Flow Matching},
author={Akshay Subramanian, Shuhui Qu, Cheol Woo Park, Sulin Liu, Janghwan
Lee, Rafael G\'omez-Bombarelli},
journal={arXiv preprint arXiv:2410.07539},
year={2024},
archivePrefix={arXiv},
eprint={2410.07539},
primaryClass={cond-mat.mtrl-sci cs.AI}
} | subramanian2024efficient |
arxiv-667877 | 2410.07540 | CoPESD: A Multi-Level Surgical Motion Dataset for Training Large Vision-Language Models to Co-Pilot Endoscopic Submucosal Dissection | <|reference_start|>CoPESD: A Multi-Level Surgical Motion Dataset for Training Large Vision-Language Models to Co-Pilot Endoscopic Submucosal Dissection: Endoscopic submucosal dissection (ESD) enables rapid resection of large lesions, minimizing recurrence rates and improving long-term overall survival. Despite these advantages, ESD is technically challenging and carries high risks of complications, necessitating skilled surgeons and precise instruments. Recent advancements in Large Visual-Language Models (LVLMs) offer promising decision support and predictive planning capabilities for robotic systems, which can augment the accuracy of ESD and reduce procedural risks. However, existing datasets for multi-level fine-grained ESD surgical motion understanding are scarce and lack detailed annotations. In this paper, we design a hierarchical decomposition of ESD motion granularity and introduce a multi-level surgical motion dataset (CoPESD) for training LVLMs as the robotic \textbf{Co}-\textbf{P}ilot of \textbf{E}ndoscopic \textbf{S}ubmucosal \textbf{D}issection. CoPESD includes 17,679 images with 32,699 bounding boxes and 88,395 multi-level motions, from over 35 hours of ESD videos for both robot-assisted and conventional surgeries. CoPESD enables granular analysis of ESD motions, focusing on the complex task of submucosal dissection. Extensive experiments on the LVLMs demonstrate the effectiveness of CoPESD in training LVLMs to predict following surgical robotic motions. As the first multimodal ESD motion dataset, CoPESD supports advanced research in ESD instruction-following and surgical automation. The dataset is available at \href{https://github.com/gkw0010/CoPESD}{https://github.com/gkw0010/CoPESD.}}<|reference_end|> | arxiv | @article{wang2024copesd,
title={CoPESD: A Multi-Level Surgical Motion Dataset for Training Large
Vision-Language Models to Co-Pilot Endoscopic Submucosal Dissection},
author={Guankun Wang, Han Xiao, Huxin Gao, Renrui Zhang, Long Bai, Xiaoxiao
Yang, Zhen Li, Hongsheng Li, Hongliang Ren},
journal={arXiv preprint arXiv:2410.07540},
year={2024},
archivePrefix={arXiv},
eprint={2410.07540},
primaryClass={cs.CV}
} | wang2024copesd: |
arxiv-667878 | 2410.07542 | Generalizable Indoor Human Activity Recognition Method Based on Micro-Doppler Corner Point Cloud and Dynamic Graph Learning | <|reference_start|>Generalizable Indoor Human Activity Recognition Method Based on Micro-Doppler Corner Point Cloud and Dynamic Graph Learning: Through-the-wall radar (TWR) human activity recognition can be achieved by fusing micro-Doppler signature extraction and intelligent decision-making algorithms. However, limited by the insufficient priori of tester in practical indoor scenarios, the trained models on one tester are commonly difficult to inference well on other testers, which causes poor generalization ability. To solve this problem, this paper proposes a generalizable indoor human activity recognition method based on micro-Doppler corner point cloud and dynamic graph learning. In the proposed method, DoG-{\mu}D-CornerDet is used for micro-Doppler corner extraction on two types of radar profiles. Then, a micro-Doppler corner filtering method based on polynomial fitting smoothing is proposed to maximize the feature distance under the constraints of the kinematic model. The extracted corners from the two types of radar profiles are concatenated together into three-dimensional point cloud. Finally, the paper proposes a dynamic graph neural network (DGNN)-based recognition method for data-to-activity label mapping. Visualization, comparison and ablation experiments are carried out to verify the effectiveness of the proposed method. The results prove that the proposed method has strong generalization ability on radar data collected from different testers.<|reference_end|> | arxiv | @article{yang2024generalizable,
title={Generalizable Indoor Human Activity Recognition Method Based on
Micro-Doppler Corner Point Cloud and Dynamic Graph Learning},
author={Xiaopeng Yang, Weicheng Gao, Xiaodong Qu, Haoyu Meng},
journal={arXiv preprint arXiv:2410.07542},
year={2024},
archivePrefix={arXiv},
eprint={2410.07542},
primaryClass={eess.SP cs.AI}
} | yang2024generalizable |
arxiv-667879 | 2410.07543 | Generalization Ability Analysis of Through-the-Wall Radar Human Activity Recognition | <|reference_start|>Generalization Ability Analysis of Through-the-Wall Radar Human Activity Recognition: Through-the-Wall radar (TWR) human activity recognition (HAR) is a technology that uses low-frequency ultra-wideband (UWB) signal to detect and analyze indoor human motion. However, the high dependence of existing end-to-end recognition models on the distribution of TWR training data makes it difficult to achieve good generalization across different indoor testers. In this regard, the generalization ability of TWR HAR is analyzed in this paper. In detail, an end-to-end linear neural network method for TWR HAR and its generalization error bound are first discussed. Second, a micro-Doppler corner representation method and the change of the generalization error before and after dimension reduction are presented. The appropriateness of the theoretical generalization errors is proved through numerical simulations and experiments. The results demonstrate that feature dimension reduction is effective in allowing recognition models to generalize across different indoor testers.<|reference_end|> | arxiv | @article{gao2024generalization,
title={Generalization Ability Analysis of Through-the-Wall Radar Human Activity
Recognition},
author={Weicheng Gao, Xiaodong Qu, Xiaopeng Yang},
journal={arXiv preprint arXiv:2410.07543},
year={2024},
archivePrefix={arXiv},
eprint={2410.07543},
primaryClass={eess.SP cs.AI}
} | gao2024generalization |
arxiv-667880 | 2410.07545 | Calibration of 3D Single-pixel Imaging Systems with a Calibration Field | <|reference_start|>Calibration of 3D Single-pixel Imaging Systems with a Calibration Field: 3D single-pixel imaging (SPI) is a promising imaging technique that can be flexibly applied to various wavebands. The main challenge in 3D SPI is that the calibration usually requires a large number of standard points as references, which are tricky to capture using single-pixel detectors. Conventional solutions involve sophisticated device deployment and cumbersome operations, resulting in hundreds of images needed for calibration. In our work, we construct a Calibration Field (CaliF) to efficiently generate the standard points from one single image. A high accuracy of the CaliF is guaranteed by the technique of deep learning and digital twin. We perform experiments with our new method to verify its validity and accuracy. We believe our work holds great potential in 3D SPI systems or even general imaging systems.<|reference_end|> | arxiv | @article{ma2024calibration,
title={Calibration of 3D Single-pixel Imaging Systems with a Calibration Field},
author={Xinyue Ma and Chenxing Wang},
journal={arXiv preprint arXiv:2410.07545},
year={2024},
archivePrefix={arXiv},
eprint={2410.07545},
primaryClass={eess.IV cs.CV}
} | ma2024calibration |
arxiv-667881 | 2410.07546 | The BRAM is the Limit: Shattering Myths, Shaping Standards, and Building Scalable PIM Accelerators | <|reference_start|>The BRAM is the Limit: Shattering Myths, Shaping Standards, and Building Scalable PIM Accelerators: Many recent FPGA-based Processor-in-Memory (PIM) architectures have appeared with promises of impressive levels of parallelism but with performance that falls short of expectations due to reduced maximum clock frequencies, an inability to scale processing elements up to the maximum BRAM capacity, and minimal hardware support for large reduction operations. In this paper, we first establish what we believe should be a "Gold Standard" set of design objectives for PIM-based FPGA designs. This Gold Standard was established to serve as an absolute metric for comparing PIMs developed on different technology nodes and vendor families as well as an aspirational goal for designers. We then present IMAGine, an In-Memory Accelerated GEMV engine used as a case study to show the Gold Standard can be realized in practice. IMAGine serves as an existence proof that dispels several myths surrounding what is normally accepted as clocking and scaling FPGA performance limitations. Specifically, IMAGine clocks at the maximum frequency of the BRAM and scales to 100% of the available BRAMs. Comparative analyses are presented showing execution speeds over existing PIM-based GEMV engines on FPGAs and achieving a 2.65x - 3.2x faster clock. An AMD Alveo U55 implementation is presented that achieves a system clock speed of 737 MHz, providing 64K bit-serial multiply-accumulate (MAC) units for GEMV operation. This establishes IMAGine as the fastest PIM-based GEMV overlay, outperforming even the custom PIM-based FPGA accelerators reported to date. Additionally, it surpasses TPU v1-v2 and Alibaba Hanguang 800 in clock speed while offering an equal or greater number of MAC units.<|reference_end|> | arxiv | @article{kabir2024the,
title={The BRAM is the Limit: Shattering Myths, Shaping Standards, and Building
Scalable PIM Accelerators},
author={MD Arafat Kabir, Tendayi Kamucheka, Nathaniel Fredricks, Joel Mandebi,
Jason Bakos, Miaoqing Huang, David Andrews},
journal={arXiv preprint arXiv:2410.07546},
year={2024},
doi={10.1109/FCCM60383.2024.00045},
archivePrefix={arXiv},
eprint={2410.07546},
primaryClass={cs.AR}
} | kabir2024the |
arxiv-667882 | 2410.07547 | Comprehensive Online Training and Deployment for Spiking Neural Networks | <|reference_start|>Comprehensive Online Training and Deployment for Spiking Neural Networks: Spiking Neural Networks (SNNs) are considered to have enormous potential in the future development of Artificial Intelligence (AI) due to their brain-inspired and energy-efficient properties. In the current supervised learning domain of SNNs, compared to vanilla Spatial-Temporal Back-propagation (STBP) training, online training can effectively overcome the risk of GPU memory explosion and has received widespread academic attention. However, the current proposed online training methods cannot tackle the inseparability problem of temporal dependent gradients and merely aim to optimize the training memory, resulting in no performance advantages compared to the STBP training models in the inference phase. To address the aforementioned challenges, we propose Efficient Multi-Precision Firing (EM-PF) model, which is a family of advanced spiking models based on floating-point spikes and binary synaptic weights. We point out that EM-PF model can effectively separate temporal gradients and achieve full-stage optimization towards computation speed and memory footprint. Experimental results have demonstrated that EM-PF model can be flexibly combined with various techniques including random back-propagation, parallel computation and channel attention mechanism, to achieve state-of-the-art performance with extremely low computational overhead in the field of online learning.<|reference_end|> | arxiv | @article{hao2024comprehensive,
title={Comprehensive Online Training and Deployment for Spiking Neural Networks},
author={Zecheng Hao, Yifan Huang, Zijie Xu, Zhaofei Yu, Tiejun Huang},
journal={arXiv preprint arXiv:2410.07547},
year={2024},
archivePrefix={arXiv},
eprint={2410.07547},
primaryClass={cs.NE cs.AI}
} | hao2024comprehensive |
arxiv-667883 | 2410.07548 | Hybrid Summary Statistics | <|reference_start|>Hybrid Summary Statistics: We present a way to capture high-information posteriors from training sets that are sparsely sampled over the parameter space for robust simulation-based inference. In physical inference problems, we can often apply domain knowledge to define traditional summary statistics to capture some of the information in a dataset. We show that augmenting these statistics with neural network outputs to maximise the mutual information improves information extraction compared to neural summaries alone or their concatenation to existing summaries and makes inference robust in settings with low training data. We introduce 1) two loss formalisms to achieve this and 2) apply the technique to two different cosmological datasets to extract non-Gaussian parameter information.<|reference_end|> | arxiv | @article{makinen2024hybrid,
title={Hybrid Summary Statistics},
author={T. Lucas Makinen, Ce Sui, Benjamin D. Wandelt, Natalia Porqueres, Alan
Heavens},
journal={arXiv preprint arXiv:2410.07548},
year={2024},
archivePrefix={arXiv},
eprint={2410.07548},
primaryClass={stat.ML astro-ph.CO cs.IT cs.LG math.IT physics.data-an}
} | makinen2024hybrid |
arxiv-667884 | 2410.07549 | OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting | <|reference_start|>OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting: Entity Linking (EL) is the process of associating ambiguous textual mentions to specific entities in a knowledge base. Traditional EL methods heavily rely on large datasets to enhance their performance, a dependency that becomes problematic in the context of few-shot entity linking, where only a limited number of examples are available for training. To address this challenge, we present OneNet, an innovative framework that utilizes the few-shot learning capabilities of Large Language Models (LLMs) without the need for fine-tuning. To the best of our knowledge, this marks a pioneering approach to applying LLMs to few-shot entity linking tasks. OneNet is structured around three key components prompted by LLMs: (1) an entity reduction processor that simplifies inputs by summarizing and filtering out irrelevant entities, (2) a dual-perspective entity linker that combines contextual cues and prior knowledge for precise entity linking, and (3) an entity consensus judger that employs a unique consistency algorithm to alleviate the hallucination in the entity linking reasoning. Comprehensive evaluations across seven benchmark datasets reveal that OneNet outperforms current state-of-the-art entity linking methods.<|reference_end|> | arxiv | @article{liu2024onenet:,
title={OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via
Large Language Model Prompting},
author={Xukai Liu, Ye Liu, Kai Zhang, Kehang Wang, Qi Liu, Enhong Chen},
journal={arXiv preprint arXiv:2410.07549},
year={2024},
archivePrefix={arXiv},
eprint={2410.07549},
primaryClass={cs.CL cs.AI}
} | liu2024onenet: |
arxiv-667885 | 2410.07550 | Conditional Lagrangian Wasserstein Flow for Time Series Imputation | <|reference_start|>Conditional Lagrangian Wasserstein Flow for Time Series Imputation: Time series imputation is important for numerous real-world applications. To overcome the limitations of diffusion model-based imputation methods, e.g., slow convergence in inference, we propose a novel method for time series imputation in this work, called Conditional Lagrangian Wasserstein Flow. The proposed method leverages the (conditional) optimal transport theory to learn the probability flow in a simulation-free manner, in which the initial noise, missing data, and observations are treated as the source distribution, target distribution, and conditional information, respectively. According to the principle of least action in Lagrangian mechanics, we learn the velocity by minimizing the corresponding kinetic energy. Moreover, to incorporate more prior information into the model, we parameterize the derivative of a task-specific potential function via a variational autoencoder, and combine it with the base estimator to formulate a Rao-Blackwellized sampler. The proposed model allows us to take fewer intermediate steps to produce high-quality samples for inference compared to existing diffusion methods. Finally, the experimental results on the real-world datasets show that the proposed method achieves competitive performance on time series imputation compared to the state-of-the-art methods.<|reference_end|> | arxiv | @article{qian2024conditional,
title={Conditional Lagrangian Wasserstein Flow for Time Series Imputation},
author={Weizhu Qian, Dalin Zhang, Yan Zhao},
journal={arXiv preprint arXiv:2410.07550},
year={2024},
archivePrefix={arXiv},
eprint={2410.07550},
primaryClass={cs.LG stat.ML}
} | qian2024conditional |
arxiv-667886 | 2410.07551 | KRAG Framework for Enhancing LLMs in the Legal Domain | <|reference_start|>KRAG Framework for Enhancing LLMs in the Legal Domain: This paper introduces Knowledge Representation Augmented Generation (KRAG), a novel framework designed to enhance the capabilities of Large Language Models (LLMs) within domain-specific applications. KRAG points to the strategic inclusion of critical knowledge entities and relationships that are typically absent in standard data sets and which LLMs do not inherently learn. In the context of legal applications, we present Soft PROLEG, an implementation model under KRAG, which uses inference graphs to aid LLMs in delivering structured legal reasoning, argumentation, and explanations tailored to user inquiries. The integration of KRAG, either as a standalone framework or in tandem with retrieval augmented generation (RAG), markedly improves the ability of language models to navigate and solve the intricate challenges posed by legal texts and terminologies. This paper details KRAG's methodology, its implementation through Soft PROLEG, and potential broader applications, underscoring its significant role in advancing natural language understanding and processing in specialized knowledge domains.<|reference_end|> | arxiv | @article{thanh2024krag,
title={KRAG Framework for Enhancing LLMs in the Legal Domain},
author={Nguyen Ha Thanh, Ken Satoh},
journal={arXiv preprint arXiv:2410.07551},
year={2024},
number={NeLaMKRR/2024/02},
archivePrefix={arXiv},
eprint={2410.07551},
primaryClass={cs.CL cs.AI}
} | thanh2024krag |
arxiv-667887 | 2410.07552 | Methods for Few-View CT Image Reconstruction | <|reference_start|>Methods for Few-View CT Image Reconstruction: Computed Tomography (CT) is an essential non-destructive three dimensional imaging modality used in medicine, security screening, and inspection of manufactured components. Typical CT data acquisition entails the collection of a thousand or more projections through the object under investigation through a range of angles covering one hundred eighty degrees or more. It may be desirable or required that the number of projections angles be reduced by one or two orders of magnitude for reasons such as acquisition time or dose. Unless specialized reconstruction algorithms are applied, reconstructing with fewer views will result in streak artifacts and failure to resolve object boundaries at certain orientations. These artifacts may substantially diminish the usefulness of the reconstructed CT volumes. Here we develop constrained and regularized numerical optimization methods to reconstruct CT volumes from 4-28 projections. These methods entail utilization of novel data fidelity and convex and non-convex regularization terms. In addition, the methods outlined here are usually carried out by a sequence of two or three numerical optimization methods in sequence. The efficacy of our methods is demonstrated on four measured and three simulated few-view CT data sets. We show that these methods outperform other state of the art few-view numerical optimization methods.<|reference_end|> | arxiv | @article{champley2024methods,
title={Methods for Few-View CT Image Reconstruction},
author={Kyle M. Champley and Michael B. Zellner and Joseph W. Tringe and
Harry E. Martz Jr},
journal={arXiv preprint arXiv:2410.07552},
year={2024},
archivePrefix={arXiv},
eprint={2410.07552},
primaryClass={physics.med-ph cs.MS physics.comp-ph}
} | champley2024methods |
arxiv-667888 | 2410.07553 | COMMA: A Communicative Multimodal Multi-Agent Benchmark | <|reference_start|>COMMA: A Communicative Multimodal Multi-Agent Benchmark: The rapid advances of multi-modal agents built on large foundation models have largely overlooked their potential for language-based communication between agents in collaborative tasks. This oversight presents a critical gap in understanding their effectiveness in real-world deployments, particularly when communicating with humans. Existing agentic benchmarks fail to address key aspects of inter-agent communication and collaboration, particularly in scenarios where agents have unequal access to information and must work together to achieve tasks beyond the scope of individual capabilities. To fill this gap, we introduce a novel benchmark designed to evaluate the collaborative performance of multimodal multi-agent systems through language communication. Our benchmark features a variety of scenarios, providing a comprehensive evaluation across four key categories of agentic capability in a communicative collaboration setting. By testing both agent-agent and agent-human collaborations using open-source and closed-source models, our findings reveal surprising weaknesses in state-of-the-art models, including proprietary models like GPT-4o. These models struggle to outperform even a simple random agent baseline in agent-agent collaboration and only surpass the random baseline when a human is involved.<|reference_end|> | arxiv | @article{ossowski2024comma:,
title={COMMA: A Communicative Multimodal Multi-Agent Benchmark},
author={Timothy Ossowski and Jixuan Chen and Danyal Maqbool and Zefan Cai
and Tyler Bradshaw and Junjie Hu},
journal={arXiv preprint arXiv:2410.07553},
year={2024},
archivePrefix={arXiv},
eprint={2410.07553},
primaryClass={cs.AI}
} | ossowski2024comma: |
arxiv-667889 | 2410.07554 | Force-Centric Imitation Learning with Force-Motion Capture System for Contact-Rich Manipulation | <|reference_start|>Force-Centric Imitation Learning with Force-Motion Capture System for Contact-Rich Manipulation: In most contact-rich manipulation tasks, humans apply time-varying forces to the target object, compensating for inaccuracies in the vision-guided hand trajectory. However, current robot learning algorithms primarily focus on trajectory-based policies, with limited attention given to learning force-related skills. To address this limitation, we introduce ForceMimic, a force-centric robot learning system, providing a natural, force-aware and robot-free robotic demonstration collection system, along with a hybrid force-motion imitation learning algorithm for robust contact-rich manipulation. Using the proposed ForceCapture system, an operator can peel a zucchini in 5 minutes, while force-feedback teleoperation takes over 13 minutes and struggles with task completion. With the collected data, we propose HybridIL to train a force-centric imitation learning model, equipped with a hybrid force-position control primitive to fit the predicted wrench-position parameters during robot execution. Experiments demonstrate that our approach enables the model to learn a more robust policy under the contact-rich task of vegetable peeling, increasing the success rate by 54.5% relative to state-of-the-art pure-vision-based imitation learning. Hardware, code, data and more results will be open-sourced on the project website at https://forcemimic.github.io.<|reference_end|> | arxiv | @article{liu2024forcemimic:,
title={ForceMimic: Force-Centric Imitation Learning with Force-Motion Capture
System for Contact-Rich Manipulation},
author={Wenhai Liu and Junbo Wang and Yiming Wang and Weiming Wang and Cewu Lu},
journal={arXiv preprint arXiv:2410.07554},
year={2024},
archivePrefix={arXiv},
eprint={2410.07554},
primaryClass={cs.RO}
} | liu2024forcemimic: |
arxiv-667890 | 2410.07558 | Streamlined shape of cyborg cockroach promotes traversability in confined environments by gap negotiation | <|reference_start|>Streamlined shape of cyborg cockroach promotes traversability in confined environments by gap negotiation: Centimeter-scale cyborg insects have a potential advantage for application in narrow environments where humans cannot operate. To realize such tasks, researchers have developed a small printed circuit board (PCB) that an insect can carry and through which it can be controlled. The electronic components usually remain bare on the board and the whole board is mounted on platform animals, resulting in an uneven morphology of the whole cyborg with sharp edges. It is well known that a streamlined body shape in artificial vehicles or robots contributes to effective locomotion by reducing drag force in media. However, little is known about how the entire body shape impacts the locomotor performance of a cyborg insect. Here, we developed a 10 mm by 10 mm board which provided electrical stimulation via Sub-GHz communication and investigated the impact of the physical arrangement of the board using the Madagascar hissing cockroach. We compared the success rate of gap negotiation between cyborgs with a mounted board and with an implanted board and found the latter outperformed the former. We demonstrated that our cyborg cockroach with an implanted board could faithfully follow locomotion commands via antennal or cercal stimulation and traverse a narrow gap like an air vent cover. In contrast to the conventional arrangement, our cyborg insects are suitable for application in a concealed environment.<|reference_end|> | arxiv | @article{kai2024streamlined,
title={Streamlined shape of cyborg cockroach promotes traversability in
confined environments by gap negotiation},
author={Kazuki Kai and Le Duc Long and Hirotaka Sato},
journal={arXiv preprint arXiv:2410.07558},
year={2024},
archivePrefix={arXiv},
eprint={2410.07558},
primaryClass={cs.RO}
} | kai2024streamlined |
arxiv-667891 | 2410.07560 | From student to working professional: A graduate survey | <|reference_start|>From student to working professional: A graduate survey: This paper reports on the results of a 2023 survey that explores the Work Integrated Learning (WiL) experiences of thirty recent Computer Science (CS) graduates. The graduates had all completed their undergraduate bachelor's degree within the last five years and were currently employed in a CS industry role. The survey asked about the graduates' perceptions within a continuum of WiL experiences from final year capstone projects to professional development in their first industry-based role. Most respondents had taken a capstone course involving a team project. Only two respondents had participated in an internship program. Our results indicate that graduates value their capstone experiences and believe that they provide transferable skills including teamwork, managing client relations, exposure to technologies and methods, and time management. When entering their first industry role, fewer than fifty percent of graduates were allocated a mentor. Overwhelmingly, these graduates noted the importance of those mentors in their transition from student to working professional. Very few of the surveyed graduates were provided with ongoing professional development opportunities. Those who did noted significant gains including growth of leadership skills and accelerated career progression. Our survey highlights a gap and an opportunity for tertiary institutions to work with industry to provide graduate onboarding and novice/early-career professional development opportunities.<|reference_end|> | arxiv | @article{whalley2024from,
title={From student to working professional: A graduate survey},
author={Jacqueline Whalley and Asanthika Imbulpitiya and Tony Clear and Harley Ogier},
journal={arXiv preprint arXiv:2410.07560},
year={2024},
archivePrefix={arXiv},
eprint={2410.07560},
primaryClass={cs.CY}
} | whalley2024from |
arxiv-667892 | 2410.07561 | AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models | <|reference_start|>AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models: The rise of various social platforms has transformed journalism. The growing demand for news content has led to the increased use of large language models (LLMs) in news production due to their speed and cost-effectiveness. However, LLMs still encounter limitations in professionalism and ethical judgment in news generation. Additionally, predicting public feedback is usually difficult before news is released. To tackle these challenges, we introduce AI-Press, an automated news drafting and polishing system based on multi-agent collaboration and Retrieval-Augmented Generation. We develop a feedback simulation system that generates public feedback considering demographic distributions. Through extensive quantitative and qualitative evaluations, our system shows significant improvements in news-generating capabilities and verifies the effectiveness of public feedback simulation.<|reference_end|> | arxiv | @article{liu2024ai-press:,
title={AI-Press: A Multi-Agent News Generating and Feedback Simulation System
Powered by Large Language Models},
author={Xiawei Liu and Shiyue Yang and Xinnong Zhang and Haoyu Kuang and
Libo Sun and Yihang Yang and Siming Chen and Xuanjing Huang and Zhongyu Wei},
journal={arXiv preprint arXiv:2410.07561},
year={2024},
archivePrefix={arXiv},
eprint={2410.07561},
primaryClass={cs.CL}
} | liu2024ai-press: |
arxiv-667893 | 2410.07563 | PLaMo-100B: A Ground-Up Language Model Designed for Japanese Proficiency | <|reference_start|>PLaMo-100B: A Ground-Up Language Model Designed for Japanese Proficiency: We introduce PLaMo-100B, a large-scale language model designed for Japanese proficiency. The model was trained from scratch using 2 trillion tokens, with architecture such as QK Normalization and Z-Loss to ensure training stability during the training process. Post-training techniques, including Supervised Fine-Tuning and Direct Preference Optimization, were applied to refine the model's performance. Benchmark evaluations suggest that PLaMo-100B performs well, particularly in Japanese-specific tasks, achieving results that are competitive with frontier models like GPT-4.<|reference_end|> | arxiv | @article{elements2024plamo-100b:,
title={PLaMo-100B: A Ground-Up Language Model Designed for Japanese Proficiency},
author={Preferred Elements: Kenshin Abe and Kaizaburo Chubachi and Yasuhiro
Fujita and Yuta Hirokawa and Kentaro Imajo and Toshiki Kataoka and Hiroyoshi
Komatsu and Hiroaki Mikami and Tsuguo Mogami and Shogo Murai and Kosuke
Nakago and Daisuke Nishino and Toru Ogawa and Daisuke Okanohara and Yoshihiko
Ozaki and Shotaro Sano and Shuji Suzuki and Tianqi Xu and Toshihiko Yanase},
journal={arXiv preprint arXiv:2410.07563},
year={2024},
archivePrefix={arXiv},
eprint={2410.07563},
primaryClass={cs.CL cs.AI cs.LG}
} | elements2024plamo-100b: |
arxiv-667894 | 2410.07564 | Boosting Deep Ensembles with Learning Rate Tuning | <|reference_start|>Boosting Deep Ensembles with Learning Rate Tuning: The Learning Rate (LR) has a high impact on deep learning training performance. A common practice is to train a Deep Neural Network (DNN) multiple times with different LR policies to find the optimal LR policy, which has been widely recognized as a daunting and costly task. Moreover, these multiple training runs have not been effectively utilized. In practice, often only the optimal LR is adopted, which misses the opportunity to further enhance the overall accuracy of the deep learning system and results in a huge waste of both computing resources and training time. This paper presents a novel framework, LREnsemble, that leverages learning rate tuning to boost deep ensemble performance. We make three original contributions. First, we show that LR tuning with different LR policies can produce highly diverse DNNs, which can be supplied as base models for deep ensembles. Second, we leverage different ensemble selection algorithms to identify high-quality deep ensembles from the large pool of base models, with significant accuracy improvements over the best single base model. Third, we propose LREnsemble, a framework that utilizes the synergy of LR tuning and deep ensemble techniques to enhance deep learning performance. The experiments on multiple benchmark datasets have demonstrated the effectiveness of LREnsemble, generating up to 2.34% accuracy improvements over well-optimized baselines.<|reference_end|> | arxiv | @article{jin2024boosting,
title={Boosting Deep Ensembles with Learning Rate Tuning},
author={Hongpeng Jin and Yanzhao Wu},
journal={arXiv preprint arXiv:2410.07564},
year={2024},
archivePrefix={arXiv},
eprint={2410.07564},
primaryClass={cs.LG}
} | jin2024boosting |
arxiv-667895 | 2410.07566 | Revisiting the Primitives of Transaction Fee Mechanism Design | <|reference_start|>Revisiting the Primitives of Transaction Fee Mechanism Design: Transaction Fee Mechanism Design studies auctions run by untrusted miners for transaction inclusion in a blockchain. Under previously-considered desiderata, an auction is considered `good' if, informally-speaking, each party (i.e., the miner, the users, and coalitions of both miners and users) has no incentive to deviate from the fixed and pre-determined protocol. In this paper, we propose a novel desideratum for transaction fee mechanisms. We say that a TFM is off-chain influence proof when the miner cannot achieve additional revenue by running a separate auction off-chain. While the previously-highlighted EIP-1559 is the gold-standard according to prior desiderata, we show that it does not satisfy off-chain influence proofness. Intuitively, this holds because a Bayesian revenue-maximizing miner can strictly increase profits by persuasively threatening to censor any bids that do not transfer a tip directly to the miner off-chain. On the other hand, we reconsider the Cryptographic (multi-party computation assisted) Second Price Auction mechanism, which is technically not `simple for miners' according to previous desiderata (since miners may wish to set a reserve by fabricating bids). We show that, in a slightly different model where the miner is allowed to set the reserve directly, this auction satisfies simplicity for users and miners, and off-chain influence proofness. Finally, we prove a strong impossibility result: no mechanism satisfies all previously-considered properties along with off-chain influence proofness, even with unlimited supply, and even after soliciting input from the miner.<|reference_end|> | arxiv | @article{ganesh2024revisiting,
title={Revisiting the Primitives of Transaction Fee Mechanism Design},
author={Aadityan Ganesh and Clayton Thomas and S. Matthew Weinberg},
journal={arXiv preprint arXiv:2410.07566},
year={2024},
doi={10.1145/3670865.3673621},
archivePrefix={arXiv},
eprint={2410.07566},
primaryClass={cs.GT econ.TH}
} | ganesh2024revisiting |
arxiv-667896 | 2410.07567 | When and Where Did it Happen? An Encoder-Decoder Model to Identify Scenario Context | <|reference_start|>When and Where Did it Happen? An Encoder-Decoder Model to Identify Scenario Context: We introduce a neural architecture finetuned for the task of scenario context generation: The relevant location and time of an event or entity mentioned in text. Contextualizing information extraction helps to scope the validity of automated findings when aggregating them as knowledge graphs. Our approach uses a high-quality curated dataset of time and location annotations in a corpus of epidemiology papers to train an encoder-decoder architecture. We also explored the use of data augmentation techniques during training. Our findings suggest that a relatively small fine-tuned encoder-decoder model performs better than out-of-the-box LLMs and semantic role labeling parsers at accurately predicting the relevant scenario information of a particular entity or event.<|reference_end|> | arxiv | @article{noriega-atala2024when,
title={When and Where Did it Happen? An Encoder-Decoder Model to Identify
Scenario Context},
author={Enrique Noriega-Atala and Robert Vacareanu and Salena Torres Ashton
and Adarsh Pyarelal and Clayton T. Morrison and Mihai Surdeanu},
journal={arXiv preprint arXiv:2410.07567},
year={2024},
archivePrefix={arXiv},
eprint={2410.07567},
primaryClass={cs.CL cs.AI}
} | noriega-atala2024when |
arxiv-667897 | 2410.07571 | How Does Vision-Language Adaptation Impact the Safety of Vision Language Models? | <|reference_start|>How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?: Vision-Language adaptation (VL adaptation) transforms Large Language Models (LLMs) into Large Vision-Language Models (LVLMs) for multimodal tasks, but this process often compromises the inherent safety capabilities embedded in the original LLMs. Despite potential harmfulness due to weakened safety measures, in-depth analysis on the effects of VL adaptation on safety remains under-explored. This study examines how VL adaptation influences safety and evaluates the impact of safety fine-tuning methods. Our analysis reveals that safety degradation occurs during VL adaptation, even when the training data is safe. While safety tuning techniques like supervised fine-tuning with safety datasets or reinforcement learning from human feedback mitigate some risks, they still lead to safety degradation and a reduction in helpfulness due to over-rejection issues. Further analysis of internal model weights suggests that VL adaptation may impact certain safety-related layers, potentially lowering overall safety levels. Additionally, our findings demonstrate that the objectives of VL adaptation and safety tuning are divergent, which often results in their simultaneous application being suboptimal. To address this, we suggest the weight merging approach as an optimal solution effectively reducing safety degradation while maintaining helpfulness. These insights help guide the development of more reliable and secure LVLMs for real-world applications.<|reference_end|> | arxiv | @article{lee2024how,
title={How Does Vision-Language Adaptation Impact the Safety of Vision Language
Models?},
author={Seongyun Lee and Geewook Kim and Jiyeon Kim and Hyunji Lee and Hoyeon
Chang and Sue Hyun Park and Minjoon Seo},
journal={arXiv preprint arXiv:2410.07571},
year={2024},
archivePrefix={arXiv},
eprint={2410.07571},
primaryClass={cs.CL cs.CV}
} | lee2024how |
arxiv-667898 | 2410.07573 | RealVul: Can We Detect Vulnerabilities in Web Applications with LLM? | <|reference_start|>RealVul: Can We Detect Vulnerabilities in Web Applications with LLM?: The latest advancements in large language models (LLMs) have sparked interest in their potential for software vulnerability detection. However, there is currently a lack of research specifically focused on vulnerabilities in the PHP language, and challenges in sample extraction and processing persist, hindering the model's ability to effectively capture the characteristics of specific vulnerabilities. In this paper, we present RealVul, the first LLM-based framework designed for PHP vulnerability detection, addressing these issues. By employing vulnerability candidate detection methods and techniques such as normalization, we can isolate potential vulnerability triggers while streamlining the code and eliminating unnecessary semantic information, enabling the model to better understand and learn from the generated vulnerability samples. We also address the issue of insufficient PHP vulnerability samples by improving data synthesis methods. To evaluate RealVul's performance, we conduct an extensive analysis using five distinct code LLMs on vulnerability data from 180 PHP projects. The results demonstrate a significant improvement in both effectiveness and generalization compared to existing methods, effectively boosting the vulnerability detection capabilities of these models.<|reference_end|> | arxiv | @article{cao2024realvul:,
title={RealVul: Can We Detect Vulnerabilities in Web Applications with LLM?},
author={Di Cao and Yong Liao and Xiuwei Shang},
journal={arXiv preprint arXiv:2410.07573},
year={2024},
archivePrefix={arXiv},
eprint={2410.07573},
primaryClass={cs.CR cs.CL}
} | cao2024realvul: |
arxiv-667899 | 2410.07574 | Gap-Dependent Bounds for Q-Learning using Reference-Advantage Decomposition | <|reference_start|>Gap-Dependent Bounds for Q-Learning using Reference-Advantage Decomposition: We study the gap-dependent bounds of two important algorithms for on-policy Q-learning for finite-horizon episodic tabular Markov Decision Processes (MDPs): UCB-Advantage (Zhang et al. 2020) and Q-EarlySettled-Advantage (Li et al. 2021). UCB-Advantage and Q-EarlySettled-Advantage improve upon the results based on Hoeffding-type bonuses and achieve the almost optimal $\sqrt{T}$-type regret bound in the worst-case scenario, where $T$ is the total number of steps. However, the benign structures of the MDPs such as a strictly positive suboptimality gap can significantly improve the regret. While gap-dependent regret bounds have been obtained for Q-learning with Hoeffding-type bonuses, it remains an open question to establish gap-dependent regret bounds for Q-learning using variance estimators in their bonuses and reference-advantage decomposition for variance reduction. We develop a novel error decomposition framework to prove gap-dependent regret bounds of UCB-Advantage and Q-EarlySettled-Advantage that are logarithmic in $T$ and improve upon existing ones for Q-learning algorithms. Moreover, we establish the gap-dependent bound for the policy switching cost of UCB-Advantage and improve that under the worst-case MDPs. To our knowledge, this paper presents the first gap-dependent regret analysis for Q-learning using variance estimators and reference-advantage decomposition and also provides the first gap-dependent analysis on policy switching cost for Q-learning.<|reference_end|> | arxiv | @article{zheng2024gap-dependent,
title={Gap-Dependent Bounds for Q-Learning using Reference-Advantage
Decomposition},
author={Zhong Zheng and Haochen Zhang and Lingzhou Xue},
journal={arXiv preprint arXiv:2410.07574},
year={2024},
archivePrefix={arXiv},
eprint={2410.07574},
primaryClass={stat.ML cs.LG}
} | zheng2024gap-dependent |
arxiv-667900 | 2410.07575 | Self-Supervised Meta-Learning for All-Layer DNN-Based Adaptive Control with Stability Guarantees | <|reference_start|>Self-Supervised Meta-Learning for All-Layer DNN-Based Adaptive Control with Stability Guarantees: A critical goal of adaptive control is enabling robots to rapidly adapt in dynamic environments. Recent studies have developed a meta-learning-based adaptive control scheme, which uses meta-learning to extract nonlinear features (represented by Deep Neural Networks (DNNs)) from offline data, and uses adaptive control to update linear coefficients online. However, such a scheme is fundamentally limited by the linear parameterization of uncertainties and does not fully unleash the capability of DNNs. This paper introduces a novel learning-based adaptive control framework that pretrains a DNN via self-supervised meta-learning (SSML) from offline trajectories and online adapts the full DNN via composite adaptation. In particular, the offline SSML stage leverages the time consistency in trajectory data to train the DNN to predict future disturbances from history, in a self-supervised manner without environment condition labels. The online stage carefully designs a control law and an adaptation law to update the full DNN with stability guarantees. Empirically, the proposed framework significantly outperforms (19-39%) various classic and learning-based adaptive control baselines, in challenging real-world quadrotor tracking problems under large dynamic wind disturbance.<|reference_end|> | arxiv | @article{he2024self-supervised,
title={Self-Supervised Meta-Learning for All-Layer DNN-Based Adaptive Control
with Stability Guarantees},
author={Guanqi He and Yogita Choudhary and Guanya Shi},
journal={arXiv preprint arXiv:2410.07575},
year={2024},
archivePrefix={arXiv},
eprint={2410.07575},
primaryClass={cs.RO}
} | he2024self-supervised |
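Records harvested from arXiv sometimes separate BibTeX `author` names with commas, although BibTeX treats ` and ` as the only author separator. A minimal stdlib sketch for normalizing such fields (the helper name `normalize_bibtex_authors` is illustrative, not part of any record above; it assumes each name is written in natural order, i.e. "Given Family", so a name written as "Family, Given" would be split incorrectly):

```python
import re

def normalize_bibtex_authors(author_field: str) -> str:
    """Rewrite an author list so that names are separated by ' and '.

    Assumption: every name is in natural order ("Given Family"), so each
    top-level comma -- or an existing ' and ' -- marks an author boundary.
    """
    # Split on ', ' (optionally followed by 'and ', the Oxford-comma case)
    # or on a bare ' and ' separator.
    parts = re.split(r",\s*(?:and\s+)?|\s+and\s+", author_field)
    names = [name.strip() for name in parts if name.strip()]
    return " and ".join(names)
```

Fields that already use ` and ` pass through unchanged, so the normalization is idempotent and safe to apply to every record.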