Dataset columns: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-664101
2410.00706
A Low-Cost, High-Speed, and Robust Bin Picking System for Factory Automation Enabled by a Non-Stop, Multi-View, and Active Vision Scheme
<|reference_start|>A Low-Cost, High-Speed, and Robust Bin Picking System for Factory Automation Enabled by a Non-Stop, Multi-View, and Active Vision Scheme: Bin picking systems in factory automation usually face robustness issues caused by sparse and noisy 3D data of metallic objects. Utilizing multiple views, especially with a one-shot 3D sensor and "sensor on hand" configuration, is gaining popularity due to its effectiveness, flexibility, and low cost. However, moving the 3D sensor to acquire multiple views for 3D fusion, joint optimization, or active vision suffers from low speed, because sensing is treated as a module decoupled from motion tasks and is not intentionally designed for a bin picking system. To address these problems, we designed a bin picking system that tightly couples a multi-view, active vision scheme with motion tasks in a "sensor on hand" configuration. It not only speeds up the system by parallelizing the high-speed sensing scheme with the robot's place action but also decides the next sensing path to maintain the continuity of the whole picking process. Unlike others focusing only on sensing evaluation, we also evaluated our design through picking experiments on 5 different types of objects without human intervention. Our experiments show the whole sensing scheme can be finished within 1.682 seconds (maximum) on CPU and the average picking completion rate is over 97.75%. Due to the parallelization with robot motion, the sensing scheme accounts for only 0.635 seconds of takt time on average.<|reference_end|>
arxiv
@article{fu2024a, title={A Low-Cost, High-Speed, and Robust Bin Picking System for Factory Automation Enabled by a Non-Stop, Multi-View, and Active Vision Scheme}, author={Xingdou Fu and Lin Miao and Yasuhiro Ohnishi and Yuki Hasegawa and Masaki Suwa}, journal={arXiv preprint arXiv:2410.00706}, year={2024}, archivePrefix={arXiv}, eprint={2410.00706}, primaryClass={cs.RO cs.CV} }
fu2024a
arxiv-664102
2410.00708
Hybrid Quantum Neural Network based Indoor User Localization using Cloud Quantum Computing
<|reference_start|>Hybrid Quantum Neural Network based Indoor User Localization using Cloud Quantum Computing: This paper proposes a hybrid quantum neural network (HQNN) for indoor user localization using received signal strength indicator (RSSI) values. We use publicly available RSSI datasets for indoor localization using WiFi, Bluetooth, and Zigbee to test the performance of the proposed HQNN. We also compare the performance of the HQNN with the recently proposed quantum fingerprinting-based user localization method. Our results show that the proposed HQNN performs better than the quantum fingerprinting algorithm since the HQNN has trainable parameters in the quantum circuits, whereas the quantum fingerprinting algorithm uses a fixed quantum circuit to calculate the similarity between the test data point and the fingerprint dataset. Unlike prior works, we also test the performance of the HQNN and quantum fingerprint algorithm on a real IBM quantum computer using cloud quantum computing services. Therefore, this paper examines the performance of the HQNN on noisy intermediate-scale quantum (NISQ) devices using real-world RSSI localization datasets. The novelty of our approach lies in the use of simple feature maps and ansatz with fewer neurons, alongside testing on actual quantum hardware using real-world data, demonstrating practical applicability in real-world scenarios.<|reference_end|>
arxiv
@article{mittal2024hybrid, title={Hybrid Quantum Neural Network based Indoor User Localization using Cloud Quantum Computing}, author={Sparsh Mittal and Yash Chand and Neel Kanth Kundu}, journal={arXiv preprint arXiv:2410.00708}, year={2024}, archivePrefix={arXiv}, eprint={2410.00708}, primaryClass={eess.SP cs.LG} }
mittal2024hybrid
arxiv-664103
2410.00709
Binding Affinity Prediction: From Conventional to Machine Learning-Based Approaches
<|reference_start|>Binding Affinity Prediction: From Conventional to Machine Learning-Based Approaches: Protein-ligand binding is the process by which a small molecule (drug or inhibitor) attaches to a target protein. The binding affinity, which refers to the strength of this interaction, is central to many important problems in bioinformatics such as drug design. An extensive amount of work has been devoted to predicting binding affinity over the past decades due to its significance. In this paper, we review all significant recent works, focusing on the methods, features, and benchmark datasets. We have observed a rising trend in the use of traditional machine learning and deep learning models for predicting binding affinity, accompanied by an increasing amount of data on proteins and small drug-like molecules. While prediction results are constantly improving, we also identify several open questions and potential directions that remain unexplored in the field. This paper could serve as an excellent starting point for machine learning researchers who wish to engage in the study of binding affinity, or for anyone with general interests in machine learning, drug discovery, and bioinformatics.<|reference_end|>
arxiv
@article{liu2024binding, title={Binding Affinity Prediction: From Conventional to Machine Learning-Based Approaches}, author={Xuefeng Liu and Songhao Jiang and Xiaotian Duan and Archit Vasan and Chong Liu and Chih-chan Tien and Heng Ma and Thomas Brettin and Fangfang Xia and Ian T. Foster and Rick L. Stevens}, journal={arXiv preprint arXiv:2410.00709}, year={2024}, archivePrefix={arXiv}, eprint={2410.00709}, primaryClass={q-bio.QM cs.AI stat.ML} }
liu2024binding
arxiv-664104
2410.00711
BioFace3D: A fully automatic pipeline for facial biomarkers extraction of 3D face reconstructions segmented from MRI
<|reference_start|>BioFace3D: A fully automatic pipeline for facial biomarkers extraction of 3D face reconstructions segmented from MRI: Facial dysmorphologies have emerged as potential critical indicators in the diagnosis and prognosis of genetic, psychotic and rare disorders. While in certain conditions these dysmorphologies are severe, in other cases they may be subtle and imperceptible to the human eye, requiring precise quantitative tools for their identification. Manual coding of facial dysmorphologies is a burdensome task and is subject to inter- and intra-observer variability. To close this gap, we present BioFace3D as a fully automatic tool for the calculation of facial biomarkers using facial models reconstructed from magnetic resonance images. The tool is divided into three automatic modules for the extraction of 3D facial models from magnetic resonance images, the registration of homologous 3D landmarks encoding facial morphology, and the calculation of facial biomarkers from anatomical landmark coordinates using geometric morphometrics techniques.<|reference_end|>
arxiv
@article{heredia-lidón2024bioface3d:, title={BioFace3D: A fully automatic pipeline for facial biomarkers extraction of 3D face reconstructions segmented from MRI}, author={{\'A}lvaro Heredia-Lid{\'o}n and Luis M. Echeverry-Quiceno and Alejandro Gonz{\'a}lez and Noem{\'i} Hostalet and Edith Pomarol-Clotet and Juan Fortea and Mar Fatj{\'o}-Vilas and Neus Mart{\'i}nez-Abad{\'i}as and Xavier Sevillano}, journal={arXiv preprint arXiv:2410.00711}, year={2024}, archivePrefix={arXiv}, eprint={2410.00711}, primaryClass={cs.CV q-bio.QM} }
heredia-lidón2024bioface3d:
arxiv-664105
2410.00712
NECOMIMI: Neural-Cognitive Multimodal EEG-informed Image Generation with Diffusion Models
<|reference_start|>NECOMIMI: Neural-Cognitive Multimodal EEG-informed Image Generation with Diffusion Models: NECOMIMI (NEural-COgnitive MultImodal EEG-Informed Image Generation with Diffusion Models) introduces a novel framework for generating images directly from EEG signals using advanced diffusion models. Unlike previous works that focused solely on EEG-image classification through contrastive learning, NECOMIMI extends this task to image generation. The proposed NERV EEG encoder demonstrates state-of-the-art (SoTA) performance across multiple zero-shot classification tasks, including 2-way, 4-way, and 200-way, and achieves top results in our newly proposed Category-based Assessment Table (CAT) Score, which evaluates the quality of EEG-generated images based on semantic concepts. A key discovery of this work is that the model tends to generate abstract or generalized images, such as landscapes, rather than specific objects, highlighting the inherent challenges of translating noisy and low-resolution EEG data into detailed visual outputs. Additionally, we introduce the CAT Score as a new metric tailored for EEG-to-image evaluation and establish a benchmark on the ThingsEEG dataset. This study underscores the potential of EEG-to-image generation while revealing the complexities and challenges that remain in bridging neural activity with visual representation.<|reference_end|>
arxiv
@article{chen2024necomimi:, title={NECOMIMI: Neural-Cognitive Multimodal EEG-informed Image Generation with Diffusion Models}, author={Chi-Sheng Chen}, journal={arXiv preprint arXiv:2410.00712}, year={2024}, archivePrefix={arXiv}, eprint={2410.00712}, primaryClass={q-bio.NC cs.LG} }
chen2024necomimi:
arxiv-664106
2410.00713
RAD: A Dataset and Benchmark for Real-Life Anomaly Detection with Robotic Observations
<|reference_start|>RAD: A Dataset and Benchmark for Real-Life Anomaly Detection with Robotic Observations: Recent advancements in industrial anomaly detection have been hindered by the lack of realistic datasets that accurately represent real-world conditions. Existing algorithms are often developed and evaluated using idealized datasets, which deviate significantly from real-life scenarios characterized by environmental noise and data corruption such as fluctuating lighting conditions, variable object poses, and unstable camera positions. To address this gap, we introduce the Realistic Anomaly Detection (RAD) dataset, the first multi-view RGB-based anomaly detection dataset specifically collected using a real robot arm, providing unique and realistic data scenarios. RAD comprises 4765 images across 13 categories and 4 defect types, collected from more than 50 viewpoints, providing a comprehensive and realistic benchmark. This multi-viewpoint setup mirrors real-world conditions where anomalies may not be detectable from every perspective. Moreover, by sampling varying numbers of views, the algorithm's performance can be comprehensively evaluated across different viewpoints. This approach enhances the thoroughness of performance assessment and helps improve the algorithm's robustness. In addition, to support 3D multi-view reconstruction algorithms, we propose a data augmentation method to improve the accuracy of pose estimation and facilitate the reconstruction of 3D point clouds. We systematically evaluate state-of-the-art RGB-based and point cloud-based models using RAD, identifying limitations and future research directions. The code and dataset can be found at https://github.com/kaichen-z/RAD<|reference_end|>
arxiv
@article{zhou2024rad:, title={RAD: A Dataset and Benchmark for Real-Life Anomaly Detection with Robotic Observations}, author={Kaichen Zhou and Yang Cao and Taewhan Kim and Hao Zhao and Hao Dong and Kai Ming Ting and Ye Zhu}, journal={arXiv preprint arXiv:2410.00713}, year={2024}, archivePrefix={arXiv}, eprint={2410.00713}, primaryClass={cs.CV} }
zhou2024rad:
arxiv-664107
2410.00718
Pseudo-Non-Linear Data Augmentation via Energy Minimization
<|reference_start|>Pseudo-Non-Linear Data Augmentation via Energy Minimization: We propose a novel and interpretable data augmentation method based on energy-based modeling and principles from information geometry. Unlike black-box generative models, which rely on deep neural networks, our approach replaces these non-interpretable transformations with explicit, theoretically grounded ones, ensuring interpretability and strong guarantees such as energy minimization. Central to our method is the introduction of the backward projection algorithm, which reverses dimension reduction to generate new data. Empirical results demonstrate that our method achieves competitive performance with black-box generative models while offering greater transparency and interpretability.<|reference_end|>
arxiv
@article{hu2024pseudo-non-linear, title={Pseudo-Non-Linear Data Augmentation via Energy Minimization}, author={Pingbang Hu and Mahito Sugiyama}, journal={arXiv preprint arXiv:2410.00718}, year={2024}, archivePrefix={arXiv}, eprint={2410.00718}, primaryClass={cs.LG} }
hu2024pseudo-non-linear
arxiv-664108
2410.00722
On the Geometry and Optimization of Polynomial Convolutional Networks
<|reference_start|>On the Geometry and Optimization of Polynomial Convolutional Networks: We study convolutional neural networks with monomial activation functions. Specifically, we prove that their parameterization map is regular and is an isomorphism almost everywhere, up to rescaling the filters. By leveraging tools from algebraic geometry, we explore the geometric properties of the image in function space of this map -- typically referred to as the neuromanifold. In particular, we compute the dimension and the degree of the neuromanifold, which measure the expressivity of the model, and describe its singularities. Moreover, for a generic large dataset, we derive an explicit formula that quantifies the number of critical points arising in the optimization of a regression loss.<|reference_end|>
arxiv
@article{shahverdi2024on, title={On the Geometry and Optimization of Polynomial Convolutional Networks}, author={Vahid Shahverdi and Giovanni Luca Marchetti and Kathl{\'e}n Kohn}, journal={arXiv preprint arXiv:2410.00722}, year={2024}, archivePrefix={arXiv}, eprint={2410.00722}, primaryClass={cs.LG math.AG} }
shahverdi2024on
arxiv-664109
2410.00724
Discriminative community detection for multiplex networks
<|reference_start|>Discriminative community detection for multiplex networks: Multiplex networks have emerged as a promising approach for modeling complex systems, where each layer represents a different mode of interaction among entities of the same type. A core task in analyzing these networks is to identify the community structure for a better understanding of the overall functioning of the network. While different methods have been proposed to detect the community structure of multiplex networks, the majority deal with extracting the consensus community structure across layers. In this paper, we address the community detection problem across two closely related multiplex networks. For example, in neuroimaging studies, it is common to have multiple multiplex brain networks, where each layer corresponds to an individual and each group to a different experimental condition. In this setting, one may be interested in both learning the community structure representing each experimental condition and the discriminative community structure between two groups. We introduce two discriminative community detection algorithms based on spectral clustering. The first approach aims to identify the discriminative subgraph structure between the groups, while the second one learns the discriminative and the consensus community structures simultaneously. The proposed approaches are evaluated on both simulated and real-world multiplex networks.<|reference_end|>
arxiv
@article{ortiz-bouza2024discriminative, title={Discriminative community detection for multiplex networks}, author={Meiby Ortiz-Bouza and Selin Aviyente}, journal={2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP)}, year={2024}, doi={10.1109/MLSP58920.2024.10734717}, archivePrefix={arXiv}, eprint={2410.00724}, primaryClass={cs.SI cs.LG} }
ortiz-bouza2024discriminative
arxiv-664110
2410.00725
Early Career Citations Capture Judicial Idiosyncrasies and Predict Judgments
<|reference_start|>Early Career Citations Capture Judicial Idiosyncrasies and Predict Judgments: Judicial impartiality is a cornerstone of well-functioning legal systems. We assemble a dataset of 112,312 civil lawsuits in U.S. District Courts to study the effect of extraneous factors on judicial decision making. We show that cases are randomly assigned to judges and that biographical judge features are predictive of judicial decisions. We use low-dimensional representations of judges' early-career citation records as generic representations of judicial idiosyncrasies. These predict future judgments with accuracies exceeding 65% for high-confidence predictions on balanced out-of-sample test cases. For 6-8% of judges, these representations are significant predictors across all judgments. These findings indicate that a small but significant group of judges routinely relies on extraneous factors, and that careful vetting of judges prior to appointment may partially address this issue. Our use of low-dimensional representations of citation records may also be generalized to other jurisdictions or to study other aspects of judicial decision making.<|reference_end|>
arxiv
@article{mahari2024early, title={Early Career Citations Capture Judicial Idiosyncrasies and Predict Judgments}, author={Robert Mahari and Sandro Claudio Lera}, journal={arXiv preprint arXiv:2410.00725}, year={2024}, archivePrefix={arXiv}, eprint={2410.00725}, primaryClass={cs.SI} }
mahari2024early
arxiv-664111
2410.00726
LTLf Synthesis on First-Order Action Theories
<|reference_start|>LTLf Synthesis on First-Order Action Theories: Golog is an expressive high-level agent language that includes nondeterministic operators that allow some decisions to be deferred to execution time. This so-called program realization is typically implemented by means of search, or in an incremental online fashion. In this paper, we consider the more realistic case where parts of the non-determinism are under the control of the environment. Program realization then becomes a synthesis problem, where a successful realization executes the program and satisfies the temporal goal for all possible environment actions. We consider Golog programs in combination with an expressive class of first-order action theories that allow for an unbounded number of objects and non-local effects, together with a temporal goal specified in a first-order extension of LTLf. We solve the synthesis problem by constructing a game arena that captures all possible executions of the program while tracking the satisfaction of the temporal goal and then solving the resulting two-player game. We evaluate the approach in two domains, showing its general feasibility.<|reference_end|>
arxiv
@article{hofmann2024ltlf, title={LTLf Synthesis on First-Order Action Theories}, author={Till Hofmann and Jens Cla{\ss}en}, journal={arXiv preprint arXiv:2410.00726}, year={2024}, archivePrefix={arXiv}, eprint={2410.00726}, primaryClass={cs.AI cs.LO} }
hofmann2024ltlf
arxiv-664112
2410.00727
Show Me What's Wrong!: Combining Charts and Text to Guide Data Analysis
<|reference_start|>Show Me What's Wrong!: Combining Charts and Text to Guide Data Analysis: Analyzing and finding anomalies in multi-dimensional datasets is a cumbersome but vital task across different domains. In the context of financial fraud detection, analysts must quickly identify suspicious activity among transactional data. This is an iterative process made of complex exploratory tasks such as recognizing patterns, grouping, and comparing. To mitigate the information overload inherent to these steps, we present a tool combining automated information highlights, Large Language Model generated textual insights, and visual analytics, facilitating exploration at different levels of detail. We perform a segmentation of the data per analysis area and visually represent each one, making use of automated visual cues to signal which require more attention. Upon user selection of an area, our system provides textual and graphical summaries. The text, acting as a link between the high-level and detailed views of the chosen segment, allows for a quick understanding of relevant details. A thorough exploration of the data comprising the selection can be done through graphical representations. The feedback gathered in a study performed with seven domain experts suggests our tool effectively supports and guides exploratory analysis, easing the identification of suspicious information.<|reference_end|>
arxiv
@article{feliciano2024show, title={"Show Me What's Wrong!": Combining Charts and Text to Guide Data Analysis}, author={Beatriz Feliciano and Rita Costa and Jean Alves and Javier Li{\'e}bana and Diogo Duarte and Pedro Bizarro}, journal={arXiv preprint arXiv:2410.00727}, year={2024}, archivePrefix={arXiv}, eprint={2410.00727}, primaryClass={cs.LG cs.CL cs.HC} }
feliciano2024show
arxiv-664113
2410.00728
Simplified priors for Object-Centric Learning
<|reference_start|>Simplified priors for Object-Centric Learning: Humans excel at abstracting data and constructing \emph{reusable} concepts, a capability lacking in current continual learning systems. The field of object-centric learning addresses this by developing abstract representations, or slots, from data without human supervision. Different methods have been proposed to tackle this task for images, most of which are overly complex, non-differentiable, or poorly scalable. In this paper, we introduce a conceptually simple, fully-differentiable, non-iterative, and scalable method called SAMP (Simplified Slot Attention with Max Pool Priors). It is implementable using only Convolution and MaxPool layers and an Attention layer. Our method encodes the input image with a Convolutional Neural Network and then uses a branch of alternating Convolution and MaxPool layers to create specialized sub-networks and extract primitive slots. These primitive slots are then used as queries for a Simplified Slot Attention over the encoded image. Despite its simplicity, our method is competitive with or outperforms previous methods on standard benchmarks.<|reference_end|>
arxiv
@article{patil2024simplified, title={Simplified priors for Object-Centric Learning}, author={Vihang Patil and Andreas Radler and Daniel Klotz and Sepp Hochreiter}, journal={arXiv preprint arXiv:2410.00728}, year={2024}, archivePrefix={arXiv}, eprint={2410.00728}, primaryClass={cs.CV cs.LG} }
patil2024simplified
arxiv-664114
2410.00731
Improved Generation of Synthetic Imaging Data Using Feature-Aligned Diffusion
<|reference_start|>Improved Generation of Synthetic Imaging Data Using Feature-Aligned Diffusion: Synthetic data generation is an important application of machine learning in the field of medical imaging. While existing approaches have successfully applied fine-tuned diffusion models for synthesizing medical images, we explore potential improvements to this pipeline through feature-aligned diffusion. Our approach aligns intermediate features of the diffusion model to the output features of an expert, and our preliminary findings show an improvement of 9% in generation accuracy and ~0.12 in SSIM diversity. Our approach is also synergistic with existing methods, and easily integrated into diffusion training pipelines for improvements. We make our code available at \url{https://github.com/lnairGT/Feature-Aligned-Diffusion}.<|reference_end|>
arxiv
@article{nair2024improved, title={Improved Generation of Synthetic Imaging Data Using Feature-Aligned Diffusion}, author={Lakshmi Nair}, journal={arXiv preprint arXiv:2410.00731}, year={2024}, doi={10.1145/3689096.3689460}, archivePrefix={arXiv}, eprint={2410.00731}, primaryClass={cs.CV cs.LG} }
nair2024improved
arxiv-664115
2410.00736
Radar Meets Vision: Robustifying Monocular Metric Depth Prediction for Mobile Robotics
<|reference_start|>Radar Meets Vision: Robustifying Monocular Metric Depth Prediction for Mobile Robotics: Mobile robots require accurate and robust depth measurements to understand and interact with the environment. While existing sensing modalities address this problem to some extent, recent research on monocular depth estimation has leveraged the information richness, yet low cost and simplicity of monocular cameras. These works have shown significant generalization capabilities, mainly in automotive and indoor settings. However, robots often operate in environments with limited scale cues, self-similar appearances, and low texture. In this work, we encode measurements from a low-cost mmWave radar into the input space of a state-of-the-art monocular depth estimation model. Despite the radar's extreme point cloud sparsity, our method demonstrates generalization and robustness across industrial and outdoor experiments. Our approach reduces the absolute relative error of depth predictions by 9-64% across a range of unseen, real-world validation datasets. Importantly, we maintain consistency of all performance metrics across all experiments and scene depths where current vision-only approaches fail. We further address the present deficit of training data in mobile robotics environments by introducing a novel methodology for synthesizing rendered, realistic learning datasets based on photogrammetric data that simulate the radar sensor observations for training. Our code, datasets, and pre-trained networks are made available at https://github.com/ethz-asl/radarmeetsvision.<|reference_end|>
arxiv
@article{job2024radar, title={Radar Meets Vision: Robustifying Monocular Metric Depth Prediction for Mobile Robotics}, author={Marco Job and Thomas Stastny and Tim Kazik and Roland Siegwart and Michael Pantic}, journal={arXiv preprint arXiv:2410.00736}, year={2024}, archivePrefix={arXiv}, eprint={2410.00736}, primaryClass={cs.RO} }
job2024radar
arxiv-664116
2410.00737
Design and In-training Optimization of Binary Search ADC for Flexible Classifiers
<|reference_start|>Design and In-training Optimization of Binary Search ADC for Flexible Classifiers: Flexible Electronics (FE) offer distinct advantages, including mechanical flexibility and low process temperatures, enabling extremely low-cost production. To address the demands of applications such as smart sensors and wearables, flexible devices must be small and operate at low supply voltages. Additionally, target applications often require classifiers to operate directly on analog sensory input, necessitating the use of Analog to Digital Converters (ADCs) to process the sensory data. However, ADCs present serious challenges, particularly in terms of high area and power consumption, especially under stringent area and energy budgets. In this work, we target common classifiers in this domain, such as MLPs and SVMs, and present a holistic approach to mitigate the elevated overhead of analog-to-digital interfacing in FE. First, we propose a novel design for a Binary Search ADC that reduces area overhead by 2X compared with the state-of-the-art Binary design and by up to 5.4X compared with a Flash ADC. Next, we present an in-training ADC optimization in which we keep only the bare-minimum representations required and simplify the ADCs by removing unnecessary components. Our in-training optimization further reduces the area of the required ADCs, in terms of transistor count, by 5X on average for less than 1% accuracy loss.<|reference_end|>
arxiv
@article{duarte2024design, title={Design and In-training Optimization of Binary Search ADC for Flexible Classifiers}, author={Paula Carolina Lozano Duarte and Florentia Afentaki and Georgios Zervakis and Mehdi B. Tahoori}, journal={arXiv preprint arXiv:2410.00737}, year={2024}, doi={10.1145/3658617.3697715}, archivePrefix={arXiv}, eprint={2410.00737}, primaryClass={cs.AR eess.SP} }
duarte2024design
arxiv-664117
2410.00741
VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models
<|reference_start|>VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models: Contrastive Language-Image Pre-training (CLIP) has been widely studied and applied in numerous applications. However, the emphasis on brief summary texts during pre-training prevents CLIP from understanding long descriptions. This issue is particularly acute for videos, given that they often contain abundant detailed content. In this paper, we propose the VideoCLIP-XL (eXtra Length) model, which aims to unleash the long-description understanding capability of video CLIP models. Firstly, we establish an automatic data collection system and gather a large-scale VILD pre-training dataset with VIdeo and Long-Description pairs. Then, we propose Text-similarity-guided Primary Component Matching (TPCM) to better learn the distribution of feature space while expanding the long-description capability. We also introduce two new tasks, namely Detail-aware Description Ranking (DDR) and Hallucination-aware Description Ranking (HDR), for further understanding improvement. Finally, we construct a Long Video Description Ranking (LVDR) benchmark for evaluating the long-description capability more comprehensively. Extensive experimental results on widely-used text-video retrieval benchmarks with both short and long descriptions, as well as on our LVDR benchmark, fully demonstrate the effectiveness of our method.<|reference_end|>
arxiv
@article{wang2024videoclip-xl:, title={VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models}, author={Jiapeng Wang and Chengyu Wang and Kunzhe Huang and Jun Huang and Lianwen Jin}, journal={arXiv preprint arXiv:2410.00741}, year={2024}, archivePrefix={arXiv}, eprint={2410.00741}, primaryClass={cs.CL cs.CV cs.MM} }
wang2024videoclip-xl:
arxiv-664118
2410.00742
Representation of Classical Data on Quantum Computers
<|reference_start|>Representation of Classical Data on Quantum Computers: Quantum computing is currently gaining significant attention, not only from the academic community but also from industry, due to its potential applications across several fields for addressing complex problems. For any practical problem that may be tackled using quantum computing, it is imperative to represent the data involved on a quantum computing system. Depending on the application, many different types of data and data structures occur, ranging from plain numbers over higher-dimensional data structures, e.g., n-dimensional images, to graphs. This report aims to provide an overview of existing methods for representing these data types on gate-based quantum computers.<|reference_end|>
arxiv
@article{lang2024representation, title={Representation of Classical Data on Quantum Computers}, author={Thomas Lang and Anja Heim and Kilian Dremel and Dimitri Prjamkov and Martin Blaimer and Markus Firsching and Anastasia Papadaki and Stefan Kasperl and Theobald OJ Fuchs}, journal={arXiv preprint arXiv:2410.00742}, year={2024}, archivePrefix={arXiv}, eprint={2410.00742}, primaryClass={quant-ph cs.DS} }
lang2024representation
arxiv-664119
2410.00746
WALINET: A water and lipid identification convolutional Neural Network for nuisance signal removal in 1H MR Spectroscopic Imaging
<|reference_start|>WALINET: A water and lipid identification convolutional Neural Network for nuisance signal removal in 1H MR Spectroscopic Imaging: Purpose. Proton Magnetic Resonance Spectroscopic Imaging (1H-MRSI) provides non-invasive spectral-spatial mapping of metabolism. However, long-standing problems in whole-brain 1H-MRSI are the spectral overlap of metabolite peaks with the large lipid signal from the scalp, and the overwhelming water signal that distorts spectra. Fast and effective methods are needed for high-resolution 1H-MRSI to accurately remove lipid and water signals while preserving the metabolite signal. The potential of supervised neural networks for this task remains unexplored, despite their success in other MRSI processing. Methods. We introduce a deep-learning method based on a modified Y-NET network for water and lipid removal in whole-brain 1H-MRSI. The WALINET (WAter and LIpid neural NETwork) was compared to conventional methods such as the state-of-the-art lipid L2 regularization and Hankel-Lanczos singular value decomposition (HLSVD) water suppression. Methods were evaluated on simulated and in-vivo whole-brain MRSI using NRMSE, SNR, CRLB, and FWHM metrics. Results. WALINET is significantly faster and needs 8s for high-resolution whole-brain MRSI, compared to 42 minutes for conventional HLSVD+L2. Quantitative analysis shows WALINET has better performance than HLSVD+L2: 1) more lipid removal with 41% lower NRMSE, 2) better metabolite signal preservation with 71% lower NRMSE in simulated data, 155% higher SNR and 50% lower CRLB in in-vivo data. Metabolic maps obtained by WALINET in healthy subjects and patients show better gray/white-matter contrast with more visible structural details. Conclusions. WALINET has superior performance for nuisance signal removal and metabolite quantification on whole-brain 1H-MRSI compared to conventional state-of-the-art techniques. This represents a new application of deep learning for MRSI processing, with potential for automated high-throughput workflows.<|reference_end|>
arxiv
@article{weiser2024walinet:, title={WALINET: A water and lipid identification convolutional Neural Network for nuisance signal removal in 1H MR Spectroscopic Imaging}, author={Paul Weiser, Georg Langs, Stanislav Motyka, Wolfgang Bogner, Sébastien Courvoisier, Malte Hoffmann, Antoine Klauser, Ovidiu C. Andronesi}, journal={arXiv preprint arXiv:2410.00746}, year={2024}, archivePrefix={arXiv}, eprint={2410.00746}, primaryClass={eess.IV cs.CV cs.LG} }
weiser2024walinet:
arxiv-664120
2410.00749
Optimizing Token Usage on Large Language Model Conversations Using the Design Structure Matrix
<|reference_start|>Optimizing Token Usage on Large Language Model Conversations Using the Design Structure Matrix: As Large Language Models become ubiquitous in many sectors and tasks, there is a need to reduce token usage, overcoming challenges such as short context windows, limited output sizes, and costs associated with token intake and generation, especially in API-served LLMs. This work brings the Design Structure Matrix from the engineering design discipline into LLM conversation optimization. Applied to a use case in which the LLM conversation is about the design of a spacecraft and its subsystems, the DSM, with its analysis tools such as clustering and sequencing, demonstrates being an effective tool to organize the conversation, minimizing the number of tokens sent to or retrieved from the LLM at once, as well as grouping chunks that can be allocated to different context windows. Hence, this work broadens the current set of methodologies for token usage optimization and opens new avenues for the integration of engineering design practices into LLMs.<|reference_end|>
arxiv
@article{alarcia2024optimizing, title={Optimizing Token Usage on Large Language Model Conversations Using the Design Structure Matrix}, author={Ramon Maria Garcia Alarcia and Alessandro Golkar}, journal={DS 134: Proceedings of the 26th International DSM Conference (DSM 2024), Stuttgart, Germany}, year={2024}, doi={10.35199/dsm2024.08}, number={DS 134}, archivePrefix={arXiv}, eprint={2410.00749}, primaryClass={cs.CL} }
alarcia2024optimizing
arxiv-664121
2410.00751
Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting
<|reference_start|>Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting: The field of privacy-preserving Natural Language Processing has risen in popularity, particularly at a time when concerns about privacy grow with the proliferation of Large Language Models. One solution consistently appearing in recent literature has been the integration of Differential Privacy (DP) into NLP techniques. In this paper, we take these approaches into critical view, discussing the restrictions that DP integration imposes, as well as bring to light the challenges that such restrictions entail. To accomplish this, we focus on $\textbf{DP-Prompt}$, a recent method for text privatization leveraging language models to rewrite texts. In particular, we explore this rewriting task in multiple scenarios, both with DP and without DP. To drive the discussion on the merits of DP in NLP, we conduct empirical utility and privacy experiments. Our results demonstrate the need for more discussion on the usability of DP in NLP and its benefits over non-DP approaches.<|reference_end|>
arxiv
@article{meisenbacher2024thinking, title={Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting}, author={Stephen Meisenbacher and Florian Matthes}, journal={arXiv preprint arXiv:2410.00751}, year={2024}, archivePrefix={arXiv}, eprint={2410.00751}, primaryClass={cs.CL} }
meisenbacher2024thinking
arxiv-664122
2410.00752
TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark
<|reference_start|>TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark: Code generation models can help improve many common software tasks ranging from code completion to defect prediction. Most of the existing benchmarks for code generation LLMs focus on code authoring or code completion. Surprisingly, there has been far less effort dedicated to benchmarking software testing, despite the strong correlation between well-tested software and effective bug detection. To address this gap, we create and release TestGenEval, a large-scale benchmark to measure test generation performance. Based on SWEBench, TestGenEval comprises 68,647 tests from 1,210 code and test file pairs across 11 well-maintained Python repositories. It covers initial test authoring, test suite completion, and code coverage improvements. Test authoring simulates the process of a developer writing a test suite from scratch, while test completion mimics the scenario where a developer aims to improve the coverage of an existing test suite. We evaluate several popular models, with sizes ranging from 7B to 405B parameters. Our detailed analysis highlights TestGenEval's contribution to a comprehensive evaluation of test generation performance. In particular, models struggle to generate high-coverage test suites, with the best model, GPT-4o, achieving an average coverage of only 35.2%. This is primarily due to models struggling to reason about execution, and their frequent assertion errors when addressing complex code paths.<|reference_end|>
arxiv
@article{jain2024testgeneval:, title={TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark}, author={Kush Jain, Gabriel Synnaeve, Baptiste Rozière}, journal={arXiv preprint arXiv:2410.00752}, year={2024}, archivePrefix={arXiv}, eprint={2410.00752}, primaryClass={cs.SE} }
jain2024testgeneval:
arxiv-664123
2410.00753
Optimizing Drug Delivery in Smart Pharmacies: A Novel Framework of Multi-Stage Grasping Network Combined with Adaptive Robotics Mechanism
<|reference_start|>Optimizing Drug Delivery in Smart Pharmacies: A Novel Framework of Multi-Stage Grasping Network Combined with Adaptive Robotics Mechanism: Robot-based smart pharmacies are essential for modern healthcare systems, enabling efficient drug delivery. However, a critical challenge exists in the robotic handling of drugs with varying shapes and overlapping positions, which previous studies have not adequately addressed. To enhance the robotic arm's ability to grasp chaotic, overlapping, and variously shaped drugs, this paper proposes a novel framework combining a multi-stage grasping network with an adaptive robotics mechanism. The framework first preprocesses images using an improved Super-Resolution Convolutional Neural Network (SRCNN) algorithm, and then employs the proposed YOLOv5+E-A-SPPFCSPC+BIFPNC (YOLO-EASB) instance segmentation algorithm for precise drug segmentation. The most suitable drugs for grasping can be determined by assessing the completeness of the segmentation masks. These segmented drugs are then processed by our improved Adaptive Feature Fusion and Grasp-Aware Network (IAFFGA-Net) with an optimized loss function, which ensures accurate picking actions even in complex environments. To control the robot grasping, a time-optimal robotic arm trajectory planning algorithm that combines an improved ant colony algorithm with 3-5-3 interpolation was developed, further improving efficiency while ensuring smooth trajectories. Finally, this system was implemented and validated within an adaptive collaborative robot setup, which dynamically adjusts to different production environments and task requirements. Experimental results demonstrate the superiority of our multi-stage grasping network in optimizing smart pharmacy operations, while also showcasing its remarkable adaptability and effectiveness in practical applications.<|reference_end|>
arxiv
@article{tang2024optimizing, title={Optimizing Drug Delivery in Smart Pharmacies: A Novel Framework of Multi-Stage Grasping Network Combined with Adaptive Robotics Mechanism}, author={Rui Tang and Shirong Guo and Yuhang Qiu and Honghui Chen and Lujin Huang and Ming Yong and Linfu Zhou and Liquan Guo}, journal={arXiv preprint arXiv:2410.00753}, year={2024}, archivePrefix={arXiv}, eprint={2410.00753}, primaryClass={cs.RO cs.CV} }
tang2024optimizing
arxiv-664124
2410.00757
Collaborative motion planning for multi-manipulator systems through Reinforcement Learning and Dynamic Movement Primitives
<|reference_start|>Collaborative motion planning for multi-manipulator systems through Reinforcement Learning and Dynamic Movement Primitives: Robotic tasks often require multiple manipulators to enhance task efficiency and speed, but this increases complexity in terms of collaboration, collision avoidance, and the expanded state-action space. To address these challenges, we propose a multi-level approach combining Reinforcement Learning (RL) and Dynamic Movement Primitives (DMP) to generate adaptive, real-time trajectories for new tasks in dynamic environments using a demonstration library. This method ensures collision-free trajectory generation and efficient collaborative motion planning. We validate the approach through experiments in the PyBullet simulation environment with UR5e robotic manipulators.<|reference_end|>
arxiv
@article{singh2024collaborative, title={Collaborative motion planning for multi-manipulator systems through Reinforcement Learning and Dynamic Movement Primitives}, author={Siddharth Singh, Tian Xu and Qing Chang}, journal={arXiv preprint arXiv:2410.00757}, year={2024}, archivePrefix={arXiv}, eprint={2410.00757}, primaryClass={cs.RO} }
singh2024collaborative
arxiv-664125
2410.00758
Under Pressure: Altimeter-Aided ICP for 3D Maps Consistency
<|reference_start|>Under Pressure: Altimeter-Aided ICP for 3D Maps Consistency: We propose a novel method to enhance the accuracy of the Iterative Closest Point (ICP) algorithm by integrating altitude constraints from a barometric pressure sensor. While ICP is widely used in mobile robotics for Simultaneous Localization and Mapping (SLAM), it is susceptible to drift, especially in underconstrained environments such as vertical shafts. To address this issue, we propose to augment ICP with altimeter measurements, reliably constraining drifts along the gravity vector. To demonstrate the potential of altimetry in SLAM, we offer an analysis of calibration procedures and noise sensitivity of various pressure sensors, improving measurements to centimeter-level accuracy. Leveraging this accuracy, we propose a novel ICP formulation that integrates altitude measurements along the gravity vector, thus simplifying the optimization problem to 3 Degrees Of Freedom (DOF). Experimental results from real-world deployments demonstrate that our method reduces vertical drift by 84% and improves overall localization accuracy compared to state-of-the-art methods in non-planar environments.<|reference_end|>
arxiv
@article{dubois2024under, title={Under Pressure: Altimeter-Aided ICP for 3D Maps Consistency}, author={William Dubois, Nicolas Samson, Effie Daum, Johann Laconte, François Pomerleau}, journal={arXiv preprint arXiv:2410.00758}, year={2024}, archivePrefix={arXiv}, eprint={2410.00758}, primaryClass={cs.RO} }
dubois2024under
arxiv-664126
2410.00759
Targeted synthetic data generation for tabular data via hardness characterization
<|reference_start|>Targeted synthetic data generation for tabular data via hardness characterization: Synthetic data generation has been proven successful in improving model performance and robustness in the context of scarce or low-quality data. Using the data valuation framework to statistically identify beneficial and detrimental observations, we introduce a novel augmentation pipeline that generates only high-value training points based on hardness characterization. We first demonstrate via benchmarks on real data that Shapley-based data valuation methods perform comparably with learning-based methods in hardness characterization tasks, while offering significant theoretical and computational advantages. Then, we show that synthetic data generators trained on the hardest points outperform non-targeted data augmentation on simulated data and on a large scale credit default prediction task. In particular, our approach improves the quality of out-of-sample predictions and is computationally more efficient than non-targeted methods.<|reference_end|>
arxiv
@article{ferracci2024targeted, title={Targeted synthetic data generation for tabular data via hardness characterization}, author={Tommaso Ferracci, Leonie Tabea Goldmann, Anton Hinel, Francesco Sanna Passino}, journal={arXiv preprint arXiv:2410.00759}, year={2024}, archivePrefix={arXiv}, eprint={2410.00759}, primaryClass={cs.LG stat.ML} }
ferracci2024targeted
arxiv-664127
2410.00767
Zero-Shot Text-to-Speech from Continuous Text Streams
<|reference_start|>Zero-Shot Text-to-Speech from Continuous Text Streams: Existing zero-shot text-to-speech (TTS) systems are typically designed to process complete sentences and are constrained by the maximum duration for which they have been trained. However, in many streaming applications, texts arrive continuously in short chunks, necessitating instant responses from the system. We identify the essential capabilities required for chunk-level streaming and introduce LiveSpeech 2, a stream-aware model that supports infinitely long speech generation, text-audio stream synchronization, and seamless transitions between short speech chunks. To achieve these, we propose (1) adopting Mamba, a class of sequence models distinguished by linear-time decoding, which is augmented by cross-attention mechanisms for conditioning, (2) utilizing rotary positional embeddings in the computation of cross-attention, enabling the model to process an infinite text stream by sliding a window, and (3) decoding with semantic guidance, a technique that aligns speech with the transcript during inference with minimal overhead. Experimental results demonstrate that our models are competitive with state-of-the-art language model-based zero-shot TTS models, while also providing flexibility to support a wide range of streaming scenarios.<|reference_end|>
arxiv
@article{dang2024zero-shot, title={Zero-Shot Text-to-Speech from Continuous Text Streams}, author={Trung Dang, David Aponte, Dung Tran, Tianyi Chen, Kazuhito Koishida}, journal={arXiv preprint arXiv:2410.00767}, year={2024}, archivePrefix={arXiv}, eprint={2410.00767}, primaryClass={cs.SD eess.AS} }
dang2024zero-shot
arxiv-664128
2410.00769
DeepAerialMapper: Deep Learning-based Semi-automatic HD Map Creation for Highly Automated Vehicles
<|reference_start|>DeepAerialMapper: Deep Learning-based Semi-automatic HD Map Creation for Highly Automated Vehicles: High-definition maps (HD maps) play a crucial role in the development, safety validation, and operation of highly automated vehicles. Efficiently collecting up-to-date sensor data from road segments and obtaining accurate maps from these are key challenges in HD map creation. Commonly used methods, such as dedicated measurement vehicles and crowd-sourced data from series vehicles, often face limitations in commercial viability. Although high-resolution aerial imagery offers a cost-effective or even free alternative, it requires significant manual effort and time to transform it into maps. In this paper, we introduce a semi-automatic method for creating HD maps from high-resolution aerial imagery. Our method involves training neural networks to semantically segment aerial images into classes relevant to HD maps. The resulting segmentation is then hierarchically post-processed to generate a prototypical HD map of visible road elements. Exporting the map to the Lanelet2 format allows easy extension for different use cases using standard tools. To train and evaluate our method, we created a dataset using public aerial imagery of urban road segments in Germany. In our evaluation, we achieved an automatic mapping of lane markings and road borders with a recall and precision exceeding 96%. The source code for our method is publicly available at https://github.com/RobertKrajewski/DeepAerialMapper.<|reference_end|>
arxiv
@article{krajewski2024deepaerialmapper:, title={DeepAerialMapper: Deep Learning-based Semi-automatic HD Map Creation for Highly Automated Vehicles}, author={Robert Krajewski and Huijo Kim}, journal={arXiv preprint arXiv:2410.00769}, year={2024}, archivePrefix={arXiv}, eprint={2410.00769}, primaryClass={cs.CV} }
krajewski2024deepaerialmapper:
arxiv-664129
2410.00771
Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting
<|reference_start|>Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting: In recent years, the rapid increase in online video content has underscored the limitations of static Video Question Answering (VideoQA) models trained on fixed datasets, as they struggle to adapt to new questions or tasks posed by newly available content. In this paper, we explore the novel challenge of VideoQA within a continual learning framework, and empirically identify a critical issue: fine-tuning a large language model (LLM) for a sequence of tasks often results in catastrophic forgetting. To address this, we propose Collaborative Prompting (ColPro), which integrates specific question constraint prompting, knowledge acquisition prompting, and visual temporal awareness prompting. These prompts aim to capture textual question context, visual content, and video temporal dynamics in VideoQA, a perspective underexplored in prior research. Experimental results on the NExT-QA and DramaQA datasets show that ColPro achieves superior performance compared to existing approaches, achieving 55.14% accuracy on NExT-QA and 71.24% accuracy on DramaQA, highlighting its practical relevance and effectiveness.<|reference_end|>
arxiv
@article{cai2024empowering, title={Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting}, author={Chen Cai, Zheng Wang, Jianjun Gao, Wenyang Liu, Ye Lu, Runzhong Zhang, Kim-Hui Yap}, journal={arXiv preprint arXiv:2410.00771}, year={2024}, archivePrefix={arXiv}, eprint={2410.00771}, primaryClass={cs.CV cs.CL} }
cai2024empowering
arxiv-664130
2410.00772
On the Generalization and Causal Explanation in Self-Supervised Learning
<|reference_start|>On the Generalization and Causal Explanation in Self-Supervised Learning: Self-supervised learning (SSL) methods learn from unlabeled data and achieve high generalization performance on downstream tasks. However, they may also suffer from overfitting to their training data and lose the ability to adapt to new tasks. To investigate this phenomenon, we conduct experiments on various SSL methods and datasets and make two observations: (1) Overfitting occurs abruptly in later layers and epochs, while generalizing features are learned in early layers for all epochs; (2) Coding rate reduction can be used as an indicator to measure the degree of overfitting in SSL models. Based on these observations, we propose Undoing Memorization Mechanism (UMM), a plug-and-play method that mitigates overfitting of the pre-trained feature extractor by aligning the feature distributions of the early and the last layers to maximize the coding rate reduction of the last layer output. The learning process of UMM is a bi-level optimization process. We provide a causal analysis of UMM to explain how UMM can help the pre-trained feature extractor overcome overfitting and recover generalization. We also demonstrate that UMM significantly improves the generalization performance of SSL methods on various downstream tasks.<|reference_end|>
arxiv
@article{qiang2024on, title={On the Generalization and Causal Explanation in Self-Supervised Learning}, author={Wenwen Qiang, Zeen Song, Ziyin Gu, Jiangmeng Li, Changwen Zheng, Fuchun Sun, Hui Xiong}, journal={arXiv preprint arXiv:2410.00772}, year={2024}, archivePrefix={arXiv}, eprint={2410.00772}, primaryClass={cs.CV cs.LG} }
qiang2024on
arxiv-664131
2410.00773
BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data
<|reference_start|>BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data: Large language models (LLMs) have become increasingly pivotal across various domains, especially in handling complex data types. This includes structured data processing, as exemplified by ChartQA and ChatGPT-Ada, and multimodal unstructured data processing as seen in Visual Question Answering (VQA). These areas have attracted significant attention from both industry and academia. Despite this, there remains a lack of unified evaluation methodologies for these diverse data handling scenarios. In response, we introduce BabelBench, an innovative benchmark framework that evaluates the proficiency of LLMs in managing multimodal multistructured data with code execution. BabelBench incorporates a dataset comprising 247 meticulously curated problems that challenge the models with tasks in perception, commonsense reasoning, logical reasoning, and so on. Besides the basic capabilities of multimodal understanding, structured data processing as well as code generation, these tasks demand advanced capabilities in exploration, planning, reasoning and debugging. Our experimental findings on BabelBench indicate that even cutting-edge models like ChatGPT 4 exhibit substantial room for improvement. The insights derived from our comprehensive analysis offer valuable guidance for future research within the community. The benchmark data can be found at https://github.com/FFD8FFE/babelbench.<|reference_end|>
arxiv
@article{wang2024babelbench:, title={BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data}, author={Xuwu Wang, Qiwen Cui, Yunzhe Tao, Yiran Wang, Ziwei Chai, Xiaotian Han, Boyi Liu, Jianbo Yuan, Jing Su, Guoyin Wang, Tingkai Liu, Liyu Chen, Tianyi Liu, Tao Sun, Yufeng Zhang, Sirui Zheng, Quanzeng You, Yang Yang, Hongxia Yang}, journal={arXiv preprint arXiv:2410.00773}, year={2024}, archivePrefix={arXiv}, eprint={2410.00773}, primaryClass={cs.AI cs.CL} }
wang2024babelbench:
arxiv-664132
2410.00774
Adaptive Motion Generation Using Uncertainty-Driven Foresight Prediction
<|reference_start|>Adaptive Motion Generation Using Uncertainty-Driven Foresight Prediction: Environmental uncertainty has long been a difficult characteristic to handle when performing real-world robot tasks, because it produces unexpected observations that cannot be covered by manual scripting. Learning-based robot control methods are a promising approach for generating flexible motions in unknown situations, but still tend to suffer under uncertainty due to their deterministic nature. In order to adaptively perform the target task under such conditions, the robot control model must be able to accurately understand the possible uncertainty, and to exploratively derive the optimal action that minimizes it. This paper extends an existing predictive-learning-based robot control method, which employs foresight prediction using dynamic internal simulation. The foresight module refines the model's hidden states by sampling multiple possible futures and replacing them with the one that leads to the lowest future uncertainty. The adaptiveness of the model was evaluated on a door-opening task: the door can be opened by pushing, pulling, or sliding, but the robot cannot visually distinguish which, and is required to adapt on the fly. The results showed that the proposed model adaptively diverged its motion through interaction with the door, whereas conventional methods failed to diverge stably. The models were analyzed with the Lyapunov exponents of RNN hidden states, which reflect the possible divergence at each time step during task execution. The result indicated that the foresight module biased the model to consider future consequences, leading to uncertainties being embedded in the policy of the robot controller rather than in the resultant observation. This is beneficial for implementing adaptive behaviors, as it induces derivation of diverse motions during exploration.<|reference_end|>
arxiv
@article{hiruma2024adaptive, title={Adaptive Motion Generation Using Uncertainty-Driven Foresight Prediction}, author={Hyogo Hiruma, Hiroshi Ito, and Tetsuya Ogata}, journal={arXiv preprint arXiv:2410.00774}, year={2024}, archivePrefix={arXiv}, eprint={2410.00774}, primaryClass={cs.RO cs.AI cs.LG} }
hiruma2024adaptive
arxiv-664133
2410.00775
Decoding Hate: Exploring Language Models' Reactions to Hate Speech
<|reference_start|>Decoding Hate: Exploring Language Models' Reactions to Hate Speech: Hate speech is a harmful form of online expression, often manifesting as derogatory posts. It is a significant risk in digital environments. With the rise of Large Language Models (LLMs), there is concern about their potential to replicate hate speech patterns, given their training on vast amounts of unmoderated internet data. Understanding how LLMs respond to hate speech is crucial for their responsible deployment. However, research on the behaviour of LLMs towards hate speech has been limited. This paper investigates the reactions of seven state-of-the-art LLMs (LLaMA 2, Vicuna, LLaMA 3, Mistral, GPT-3.5, GPT-4, and Gemini Pro) to hate speech. Through qualitative analysis, we aim to reveal the spectrum of responses these models produce, highlighting their capacity to handle hate speech inputs. We also discuss strategies to mitigate hate speech generation by LLMs, particularly through fine-tuning and guideline guardrailing. Finally, we explore the models' responses to hate speech framed in politically correct language.<|reference_end|>
arxiv
@article{piot2024decoding, title={Decoding Hate: Exploring Language Models' Reactions to Hate Speech}, author={Paloma Piot, Javier Parapar}, journal={arXiv preprint arXiv:2410.00775}, year={2024}, archivePrefix={arXiv}, eprint={2410.00775}, primaryClass={cs.CL} }
piot2024decoding
arxiv-664134
2410.00778
Google, How Should I Vote? How Users Formulate Search Queries to Find Political Information on Search Engines
<|reference_start|>Google, How Should I Vote? How Users Formulate Search Queries to Find Political Information on Search Engines: Search engine results depend not only on the algorithms but also on how users interact with them. However, factors affecting the selection of a search query remain understudied. Using a representative survey of Swiss citizens before a round of federal popular votes, this study examines how users formulate search queries related to the retirement policies that were voted on in March 2024. Contrary to existing research, we find no direct evidence of selective exposure, or users' tendency to search for pro-attitudinal information, which we explain by the less polarizing search topics. However, we find that the sentiment of the query is partially aligned with the expected vote outcome. Our results also suggest that undecided and non-voters are more likely to search for nuanced information, such as consequences and interpretations of the policies. The perceived importance and effect of the issue, political views, and sociodemographics also affect query formulation.<|reference_end|>
arxiv
@article{vziatysheva2024google, title={Google, How Should I Vote? How Users Formulate Search Queries to Find Political Information on Search Engines}, author={Victoria Vziatysheva, Mykola Makhortykh, Maryna Sydorova, Vihang Jumle}, journal={arXiv preprint arXiv:2410.00778}, year={2024}, archivePrefix={arXiv}, eprint={2410.00778}, primaryClass={cs.HC} }
vziatysheva2024google
arxiv-664135
2410.00779
Local-to-Global Self-Supervised Representation Learning for Diabetic Retinopathy Grading
<|reference_start|>Local-to-Global Self-Supervised Representation Learning for Diabetic Retinopathy Grading: Artificial intelligence algorithms have demonstrated their image classification and segmentation ability in the past decade. However, artificial intelligence algorithms perform worse on actual clinical data than on the data used for simulations. This research presents a novel hybrid learning model using self-supervised learning and knowledge distillation, which can achieve sufficient generalization and robustness. The self-attention mechanism and tokens employed in ViT, besides the local-to-global learning approach used in the hybrid model, enable the proposed algorithm to extract a high-dimensional and high-quality feature space from images. To demonstrate the proposed neural network's capability in classifying and extracting feature spaces from medical images, we use it on a dataset of Diabetic Retinopathy images, specifically the EyePACS dataset. This dataset is structurally more complex and more challenging regarding damaged areas than other medical images. This study is the first to use self-supervised learning and knowledge distillation to classify this dataset. In our algorithm, for the first time among all self-supervised learning and knowledge distillation models, the test dataset is 50% larger than the training dataset. Unlike many studies, we have not removed any images from the dataset. Finally, our algorithm achieved an accuracy of 79.1% with a linear classifier and 74.36% with the k-NN algorithm for multiclass classification. Compared to a similar state-of-the-art model, our results achieved higher accuracy and more effective representation spaces.<|reference_end|>
arxiv
@article{hajighasemloua2024local-to-global, title={Local-to-Global Self-Supervised Representation Learning for Diabetic Retinopathy Grading}, author={Mostafa Hajighasemlou, Samad Sheikhaei, Hamid Soltanian-Zadeh}, journal={arXiv preprint arXiv:2410.00779}, year={2024}, archivePrefix={arXiv}, eprint={2410.00779}, primaryClass={cs.CV eess.IV} }
hajighasemloua2024local-to-global
arxiv-664136
2410.00788
The Risks of Scientific Gerontocracy
<|reference_start|>The Risks of Scientific Gerontocracy: While much has been written about the problem of information overload in news and social media, little attention has been paid to its consequence in science. Scientific literature, however, has witnessed decades of exponential growth, to the point that the publications of the last twenty years now constitute 60% of all academic literature. This information overload is not without consequence. Our analysis reveals that, unlike other cultural products, scientific publications face unique challenges: the decreasing proportion of papers capturing large shares of researchers' attention and the slow turnover of influential papers lead to a disproportionate prominence of established works, resulting in stagnation and aging of scientific canons. To determine whether scientific hypergrowth is responsible for such ``gerontocratization of science'', we propose a generative model of paper citations based on random discovery and cumulative advantage, with a varying number of new papers each year. Our findings show that, as exponential growth intensifies, gerontocratization appears and becomes increasingly pronounced. Recognizing and understanding this mechanism is hence essential for developing targeted strategies to counteract this trend and promote a balanced and healthy renewal of scientific canons.<|reference_end|>
arxiv
@article{houssard2024the, title={The Risks of Scientific Gerontocracy}, author={Antoine Houssard, Floriana Gargiulo, Gabriele Di Bona, Tommaso Venturini, Paola Tubaro}, journal={arXiv preprint arXiv:2410.00788}, year={2024}, archivePrefix={arXiv}, eprint={2410.00788}, primaryClass={cs.DL physics.soc-ph} }
houssard2024the
arxiv-664137
2410.00792
Fast Multiplication and the PLWE-RLWE Equivalence for an Infinite Family of Cyclotomic Subextensions
<|reference_start|>Fast Multiplication and the PLWE-RLWE Equivalence for an Infinite Family of Cyclotomic Subextensions: We prove the equivalence between the Ring Learning With Errors (RLWE) and the Polynomial Learning With Errors (PLWE) problems for the maximal totally real subfield of the $2^r 3^s$-th cyclotomic field for $r \geq 3$ and $s \geq 1$. Moreover, we describe a fast algorithm for computing the product of two elements in the ring of integers of these subfields. This multiplication algorithm has quasilinear complexity in the dimension of the field, as it makes use of the fast Discrete Cosine Transform (DCT). Our approach assumes that the two input polynomials are given in a basis of Chebyshev-like polynomials, in contrast to the customary power basis. To validate this assumption, we prove that the change of basis from the power basis to the Chebyshev-like basis can be computed with $\mathcal{O}(n \log n)$ arithmetic operations, where $n$ is the problem dimension. Finally, we provide a heuristic and theoretical comparison of the vulnerability to some attacks for the $p$-th cyclotomic field versus the maximal totally real subextension of the $4p$-th cyclotomic field for a reasonable set of parameters of cryptographic size.<|reference_end|>
arxiv
@article{ahola2024fast, title={Fast Multiplication and the PLWE-RLWE Equivalence for an Infinite Family of Cyclotomic Subextensions}, author={Joonas Ahola, Iv\'an Blanco-Chac\'on, Wilmar Bola\~nos, Antti Haavikko, Camilla Hollanti, Rodrigo Mart\'in S\'anchez-Ledesma}, journal={arXiv preprint arXiv:2410.00792}, year={2024}, archivePrefix={arXiv}, eprint={2410.00792}, primaryClass={cs.CR math.NT} }
ahola2024fast
arxiv-664138
2410.00796
Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks
<|reference_start|>Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks: Power system operators must ensure that dispatch decisions remain feasible in case of grid outages or contingencies to prevent cascading failures and ensure reliable operation. However, checking the feasibility of all $N - k$ contingencies -- every possible simultaneous failure of $k$ grid components -- is computationally intractable for even small $k$, requiring system operators to resort to heuristic screening methods. Because of the increase in uncertainty and changes in system behaviors, heuristic lists might not include all relevant contingencies, generating false negatives in which unsafe scenarios are misclassified as safe. In this work, we propose to use input-convex neural networks (ICNNs) for contingency screening. We show that ICNN reliability can be determined by solving a convex optimization problem, and by scaling model weights using this problem as a differentiable optimization layer during training, we can learn an ICNN classifier that is both data-driven and has provably guaranteed reliability. Namely, our method can ensure a zero false negative rate. We empirically validate this methodology in a case study on the IEEE 39-bus test network, observing that it yields substantial (10-20x) speedups while having excellent classification accuracy.<|reference_end|>
arxiv
@article{christianson2024fast, title={Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks}, author={Nicolas Christianson, Wenqi Cui, Steven Low, Weiwei Yang, Baosen Zhang}, journal={arXiv preprint arXiv:2410.00796}, year={2024}, archivePrefix={arXiv}, eprint={2410.00796}, primaryClass={eess.SY cs.LG cs.SY math.OC} }
christianson2024fast
arxiv-664139
2410.00801
Understanding Data Movement in AMD Multi-GPU Systems with Infinity Fabric
<|reference_start|>Understanding Data Movement in AMD Multi-GPU Systems with Infinity Fabric: Modern GPU systems are constantly evolving to meet the needs of computing-intensive applications in scientific and machine learning domains. However, there is typically a gap between the hardware capacity and the achievable application performance. This work aims to provide a better understanding of the Infinity Fabric interconnects on AMD GPUs and CPUs. We propose a test and evaluation methodology for characterizing the performance of data movements on multi-GPU systems, stressing different communication options on AMD MI250X GPUs, including point-to-point and collective communication, and memory allocation strategies between GPUs, as well as the host CPU. In a single-node setup with four GPUs, we show that direct peer-to-peer memory accesses between GPUs and utilization of the RCCL library outperform MPI-based solutions in terms of memory/communication latency and bandwidth. Our test and evaluation method serves as a base for validating memory and communication strategies on a system and improving applications on AMD multi-GPU computing systems.<|reference_end|>
arxiv
@article{schieffer2024understanding, title={Understanding Data Movement in AMD Multi-GPU Systems with Infinity Fabric}, author={Gabin Schieffer, Ruimin Shi, Stefano Markidis, Andreas Herten, Jennifer Faj, Ivy Peng}, journal={arXiv preprint arXiv:2410.00801}, year={2024}, archivePrefix={arXiv}, eprint={2410.00801}, primaryClass={cs.DC} }
schieffer2024understanding
arxiv-664140
2410.00807
WiGNet: Windowed Vision Graph Neural Network
<|reference_start|>WiGNet: Windowed Vision Graph Neural Network: In recent years, Graph Neural Networks (GNNs) have demonstrated strong adaptability to various real-world challenges, with architectures such as Vision GNN (ViG) achieving state-of-the-art performance in several computer vision tasks. However, their practical applicability is hindered by the computational complexity of constructing the graph, which scales quadratically with the image size. In this paper, we introduce a novel Windowed vision Graph neural Network (WiGNet) model for efficient image processing. WiGNet explores a different strategy from previous works by partitioning the image into windows and constructing a graph within each window. Therefore, our model uses graph convolutions instead of the typical 2D convolution or self-attention mechanism. WiGNet effectively manages computational and memory complexity for large image sizes. We evaluate our method in the ImageNet-1k benchmark dataset and test the adaptability of WiGNet using the CelebA-HQ dataset as a downstream task with higher-resolution images. In both of these scenarios, our method achieves competitive results compared to previous vision GNNs while keeping memory and computational complexity at bay. WiGNet offers a promising solution toward the deployment of vision GNNs in real-world applications. We publicly released the code at https://github.com/EIDOSLAB/WiGNet.<|reference_end|>
arxiv
@article{spadaro2024wignet:, title={WiGNet: Windowed Vision Graph Neural Network}, author={Gabriele Spadaro and Marco Grangetto and Attilio Fiandrotti and Enzo Tartaglione and Jhony H. Giraldo}, journal={arXiv preprint arXiv:2410.00807}, year={2024}, archivePrefix={arXiv}, eprint={2410.00807}, primaryClass={cs.CV cs.AI} }
spadaro2024wignet:
arxiv-664141
2410.00811
Improving curriculum learning for target speaker extraction with synthetic speakers
<|reference_start|>Improving curriculum learning for target speaker extraction with synthetic speakers: Target speaker extraction (TSE) aims to isolate individual speaker voices from complex speech environments. The effectiveness of TSE systems is often compromised when the speaker characteristics are similar to each other. Recent research has introduced curriculum learning (CL), in which TSE models are trained incrementally on speech samples of increasing complexity. In CL training, the model is first trained on samples with low speaker similarity between the target and interference speakers, and then on samples with high speaker similarity. To further improve CL, this paper uses a $k$-nearest neighbor-based voice conversion method to simulate and generate speech of diverse interference speakers, and then uses the generated data as part of the CL. Experiments demonstrate that training data based on synthetic speakers can effectively enhance the model's capabilities and significantly improve the performance of multiple TSE systems.<|reference_end|>
arxiv
@article{liu2024improving, title={Improving curriculum learning for target speaker extraction with synthetic speakers}, author={Yun Liu, Xuechen Liu, Junichi Yamagishi}, journal={arXiv preprint arXiv:2410.00811}, year={2024}, archivePrefix={arXiv}, eprint={2410.00811}, primaryClass={cs.SD eess.AS} }
liu2024improving
arxiv-664142
2410.00812
A generative framework to bridge data-driven models and scientific theories in language neuroscience
<|reference_start|>A generative framework to bridge data-driven models and scientific theories in language neuroscience: Representations from large language models are highly effective at predicting BOLD fMRI responses to language stimuli. However, these representations are largely opaque: it is unclear what features of the language stimulus drive the response in each brain area. We present generative explanation-mediated validation, a framework for generating concise explanations of language selectivity in the brain and then validating those explanations in follow-up experiments that use synthetic stimuli. This approach is successful at explaining selectivity both in individual voxels and cortical regions of interest (ROIs). We show that explanatory accuracy is closely related to the predictive power and stability of the underlying statistical models. These results demonstrate that LLMs can be used to bridge the widening gap between data-driven models and formal scientific theories.<|reference_end|>
arxiv
@article{antonello2024a, title={A generative framework to bridge data-driven models and scientific theories in language neuroscience}, author={Richard Antonello, Chandan Singh, Shailee Jain, Aliyah Hsu, Jianfeng Gao, Bin Yu, Alexander Huth}, journal={arXiv preprint arXiv:2410.00812}, year={2024}, archivePrefix={arXiv}, eprint={2410.00812}, primaryClass={cs.CL q-bio.NC} }
antonello2024a
arxiv-664143
2410.00817
Maximum entropy and quantized metric models for absolute category ratings
<|reference_start|>Maximum entropy and quantized metric models for absolute category ratings: The datasets of most image quality assessment studies contain ratings on a categorical scale with five levels, from bad (1) to excellent (5). For each stimulus, the number of ratings from 1 to 5 is summarized and given in the form of the mean opinion score. In this study, we investigate families of multinomial probability distributions parameterized by mean and variance that are used to fit the empirical rating distributions. To this end, we consider quantized metric models based on continuous distributions that model perceived stimulus quality on a latent scale. The probabilities for the rating categories are determined by quantizing the corresponding random variables using threshold values. Furthermore, we introduce a novel discrete maximum entropy distribution for a given mean and variance. We compare the performance of these models and the state of the art given by the generalized score distribution for two large data sets, KonIQ-10k and VQEG HDTV. Given an input distribution of ratings, our fitted two-parameter models predict unseen ratings better than the empirical distribution. In contrast to empirical ACR distributions and their discrete models, our continuous models can provide fine-grained estimates of quantiles of quality of experience that are relevant to service providers to satisfy a target fraction of the user population.<|reference_end|>
arxiv
@article{saupe2024maximum, title={Maximum entropy and quantized metric models for absolute category ratings}, author={Dietmar Saupe and Krzysztof Rusek and David H\"agele and Daniel Weiskopf and Lucjan Janowski}, journal={arXiv preprint arXiv:2410.00817}, year={2024}, archivePrefix={arXiv}, eprint={2410.00817}, primaryClass={cs.MM eess.IV} }
saupe2024maximum
arxiv-664144
2410.00822
VHASR: A Multimodal Speech Recognition System With Vision Hotwords
<|reference_start|>VHASR: A Multimodal Speech Recognition System With Vision Hotwords: The image-based multimodal automatic speech recognition (ASR) model enhances speech recognition performance by incorporating audio-related images. However, some works suggest that introducing image information to the model does not help improve ASR performance. In this paper, we propose a novel approach that effectively utilizes audio-related image information and present VHASR, a multimodal speech recognition system that uses vision as hotwords to strengthen the model's speech recognition capability. Our system utilizes a dual-stream architecture, which firstly transcribes the text on the two streams separately, and then combines the outputs. We evaluate the proposed model on four datasets: Flickr8k, ADE20k, COCO, and OpenImages. The experimental results show that VHASR can effectively utilize key information in images to enhance the model's speech recognition ability. Its performance not only surpasses unimodal ASR, but also achieves SOTA among existing image-based multimodal ASR.<|reference_end|>
arxiv
@article{hu2024vhasr:, title={VHASR: A Multimodal Speech Recognition System With Vision Hotwords}, author={Jiliang Hu, Zuchao Li, Ping Wang, Haojun Ai, Lefei Zhang, Hai Zhao}, journal={arXiv preprint arXiv:2410.00822}, year={2024}, archivePrefix={arXiv}, eprint={2410.00822}, primaryClass={cs.SD cs.CL eess.AS} }
hu2024vhasr:
arxiv-664145
2410.00823
Squeeze-and-Remember Block
<|reference_start|>Squeeze-and-Remember Block: Convolutional Neural Networks (CNNs) are important for many machine learning tasks. They are built with different types of layers: convolutional layers that detect features, dropout layers that help to avoid over-reliance on any single neuron, and residual layers that allow the reuse of features. However, CNNs lack a dynamic feature retention mechanism similar to the human brain's memory, limiting their ability to use learned information in new contexts. To bridge this gap, we introduce the "Squeeze-and-Remember" (SR) block, a novel architectural unit that gives CNNs dynamic memory-like functionalities. The SR block selectively memorizes important features during training, and then adaptively re-applies these features during inference. This improves the network's ability to make contextually informed predictions. Empirical results on ImageNet and Cityscapes datasets demonstrate the SR block's efficacy: integration into ResNet50 improved top-1 validation accuracy on ImageNet by 0.52% over dropout2d alone, and its application in DeepLab v3 increased mean Intersection over Union in Cityscapes by 0.20%. These improvements are achieved with minimal computational overhead. This shows the SR block's potential to enhance the capabilities of CNNs in image processing tasks.<|reference_end|>
arxiv
@article{cakaj2024squeeze-and-remember, title={Squeeze-and-Remember Block}, author={Rinor Cakaj, Jens Mehnert, Bin Yang}, journal={arXiv preprint arXiv:2410.00823}, year={2024}, archivePrefix={arXiv}, eprint={2410.00823}, primaryClass={cs.CV cs.LG} }
cakaj2024squeeze-and-remember
arxiv-664146
2410.00825
Developing a BLAS library for the AMD AI Engine
<|reference_start|>Developing a BLAS library for the AMD AI Engine: Spatial (dataflow) computer architectures can mitigate the control and performance overhead of classical von Neumann architectures such as traditional CPUs. Driven by the popularity of Machine Learning (ML) workloads, spatial devices are being marketed as ML inference accelerators. Despite providing a rich software ecosystem for ML practitioners, their adoption in other scientific domains is hindered by the steep learning curve and lack of reusable software, which makes them inaccessible to non-experts. We present our ongoing project AIEBLAS, an open-source, expandable implementation of Basic Linear Algebra Routines (BLAS) for the AMD AI Engine. Numerical routines are designed to be easily reusable, customized, and composed in dataflow programs, leveraging the characteristics of the targeted device without requiring the user to deeply understand the underlying hardware and programming model.<|reference_end|>
arxiv
@article{laan2024developing, title={Developing a BLAS library for the AMD AI Engine}, author={Tristan Laan, Tiziano De Matteis}, journal={arXiv preprint arXiv:2410.00825}, year={2024}, archivePrefix={arXiv}, eprint={2410.00825}, primaryClass={cs.DC cs.ET} }
laan2024developing
arxiv-664147
2410.00833
Geometric shape matching for recovering protein conformations from single-particle Cryo-EM data
<|reference_start|>Geometric shape matching for recovering protein conformations from single-particle Cryo-EM data: We address recovery of the three-dimensional backbone structure of single polypeptide proteins from single-particle cryo-electron microscopy (Cryo-SPA) data. Cryo-SPA produces noisy tomographic projections of electrostatic potentials of macromolecules. From these projections, we use methods from shape analysis to recover the three-dimensional backbone structure. Thus, we view the reconstruction problem as an indirect matching problem, where a point cloud representation of the protein backbone is deformed to match 2D tomography data. The deformations are obtained via the action of a matrix Lie group. By selecting a deformation energy, the optimality conditions are obtained, which lead to computational algorithms for optimal deformations. We showcase our approach on synthetic data, for which we recover the three-dimensional structure of the backbone.<|reference_end|>
arxiv
@article{jansson2024geometric, title={Geometric shape matching for recovering protein conformations from single-particle Cryo-EM data}, author={Erik Jansson, Jonathan Krook, Klas Modin, Ozan \"Oktem}, journal={arXiv preprint arXiv:2410.00833}, year={2024}, archivePrefix={arXiv}, eprint={2410.00833}, primaryClass={q-bio.BM cs.NA math.DG math.NA math.OC} }
jansson2024geometric
arxiv-664148
2410.00835
Solving High-Dimensional Partial Integral Differential Equations: The Finite Expression Method
<|reference_start|>Solving High-Dimensional Partial Integral Differential Equations: The Finite Expression Method: In this paper, we introduce a new finite expression method (FEX) to solve high-dimensional partial integro-differential equations (PIDEs). This approach builds upon the original FEX and its inherent advantages with new advances: 1) A novel method of parameter grouping is proposed to reduce the number of coefficients in high-dimensional function approximation; 2) A Taylor series approximation method is implemented to significantly improve the computational efficiency and accuracy of the evaluation of the integral terms of PIDEs. The new FEX based method, denoted FEX-PG to indicate the addition of the parameter grouping (PG) step to the algorithm, provides both high accuracy and interpretable numerical solutions, with the outcome being an explicit equation that facilitates intuitive understanding of the underlying solution structures. These features are often absent in traditional methods, such as finite element methods (FEM) and finite difference methods, as well as in deep learning-based approaches. To benchmark our method against recent advances, we apply the new FEX-PG to solve benchmark PIDEs in the literature. In high-dimensional settings, FEX-PG exhibits strong and robust performance, achieving relative errors on the order of single precision machine epsilon.<|reference_end|>
arxiv
@article{hardwick2024solving, title={Solving High-Dimensional Partial Integral Differential Equations: The Finite Expression Method}, author={Gareth Hardwick, Senwei Liang, Haizhao Yang}, journal={arXiv preprint arXiv:2410.00835}, year={2024}, archivePrefix={arXiv}, eprint={2410.00835}, primaryClass={math.NA cs.LG cs.NA} }
hardwick2024solving
arxiv-664149
2410.00836
Towards Fairness and Privacy: A Novel Data Pre-processing Optimization Framework for Non-binary Protected Attributes
<|reference_start|>Towards Fairness and Privacy: A Novel Data Pre-processing Optimization Framework for Non-binary Protected Attributes: The reason behind the unfair outcomes of AI is often rooted in biased datasets. Therefore, this work presents a framework for addressing fairness by debiasing datasets containing a (non-)binary protected attribute. The framework proposes a combinatorial optimization problem where heuristics such as genetic algorithms can be used to solve for the stated fairness objectives. The framework addresses this by finding a data subset that minimizes a certain discrimination measure. Depending on a user-defined setting, the framework enables different use cases, such as data removal, the addition of synthetic data, or exclusive use of synthetic data. The exclusive use of synthetic data in particular enhances the framework's ability to preserve privacy while optimizing for fairness. In a comprehensive evaluation, we demonstrate that under our framework, genetic algorithms can effectively yield fairer datasets compared to the original data. In contrast to prior work, the framework exhibits a high degree of flexibility as it is metric- and task-agnostic, can be applied to both binary or non-binary protected attributes, and demonstrates efficient runtime.<|reference_end|>
arxiv
@article{duong2024towards, title={Towards Fairness and Privacy: A Novel Data Pre-processing Optimization Framework for Non-binary Protected Attributes}, author={Manh Khoi Duong and Stefan Conrad}, journal={arXiv preprint arXiv:2410.00836}, year={2024}, doi={10.1007/978-981-99-8696-5}, archivePrefix={arXiv}, eprint={2410.00836}, primaryClass={cs.LG cs.CY} }
duong2024towards
arxiv-664150
2410.00838
Better Boosting of Communication Oracles, or Not
<|reference_start|>Better Boosting of Communication Oracles, or Not: Suppose we have a two-party communication protocol for $f$ which allows the parties to make queries to an oracle computing $g$; for example, they may query an Equality oracle. To translate this protocol into a randomized protocol, we must replace the oracle with a randomized subroutine for solving $g$. If $q$ queries are made, the standard technique requires that we boost the error of each subroutine down to $O(1/q)$, leading to communication complexity which grows as $q \log q$. For which oracles $g$ can this naive boosting technique be improved? We focus on the oracles which can be computed by constant-cost randomized protocols, and show that the naive boosting strategy can be improved for the Equality oracle but not the 1-Hamming Distance oracle. Two surprising consequences are (1) a new example of a problem where the cost of computing $k$ independent copies grows superlinear in $k$, drastically simplifying the only previous example due to Blais & Brody (CCC 2019); and (2) a new proof that Equality is not complete for the class of constant-cost randomized communication (Harms, Wild, & Zamaraev, STOC 2022; Hambardzumyan, Hatami, & Hatami, Israel Journal of Mathematics 2022).<|reference_end|>
arxiv
@article{harms2024better, title={Better Boosting of Communication Oracles, or Not}, author={Nathaniel Harms, Artur Riazanov}, journal={arXiv preprint arXiv:2410.00838}, year={2024}, archivePrefix={arXiv}, eprint={2410.00838}, primaryClass={cs.CC cs.DS} }
harms2024better
arxiv-664151
2410.00841
Diffusion-Informed Probabilistic Contact Search for Multi-Finger Manipulation
<|reference_start|>Diffusion-Informed Probabilistic Contact Search for Multi-Finger Manipulation: Planning contact-rich interactions for multi-finger manipulation is challenging due to the high-dimensionality and hybrid nature of dynamics. Recent advances in data-driven methods have shown promise, but are sensitive to the quality of training data. Combining learning with classical methods like trajectory optimization and search adds additional structure to the problem and domain knowledge in the form of constraints, which can lead to outperforming the data on which models are trained. We present Diffusion-Informed Probabilistic Contact Search (DIPS), which uses an A* search to plan a sequence of contact modes informed by a diffusion model. We train the diffusion model on a dataset of demonstrations consisting of contact modes and trajectories generated by a trajectory optimizer given those modes. In addition, we use a particle filter-inspired method to reason about variability in diffusion sampling arising from model error, estimating likelihoods of trajectories using a learned discriminator. We show that our method outperforms ablations that do not reason about variability and can plan contact sequences that outperform those found in training data across multiple tasks. We evaluate on simulated tabletop card sliding and screwdriver turning tasks, as well as the screwdriver task in hardware to show that our combined learning and planning approach transfers to the real world.<|reference_end|>
arxiv
@article{kumar2024diffusion-informed, title={Diffusion-Informed Probabilistic Contact Search for Multi-Finger Manipulation}, author={Abhinav Kumar (1), Thomas Power (1), Fan Yang (1), Sergio Aguilera Marinovic (2), Soshi Iba (2), Rana Soltani Zarrin (2), Dmitry Berenson (1) ((1) Robotics Department, University of Michigan, (2) Honda Research Institute USA)}, journal={arXiv preprint arXiv:2410.00841}, year={2024}, archivePrefix={arXiv}, eprint={2410.00841}, primaryClass={cs.RO} }
kumar2024diffusion-informed
arxiv-664152
2410.00844
Learning Stochastic Dynamics from Snapshots through Regularized Unbalanced Optimal Transport
<|reference_start|>Learning Stochastic Dynamics from Snapshots through Regularized Unbalanced Optimal Transport: Reconstructing dynamics using samples from sparsely time-resolved snapshots is an important problem in both natural sciences and machine learning. Here, we introduce a new deep learning approach for solving regularized unbalanced optimal transport (RUOT) and inferring continuous unbalanced stochastic dynamics from observed snapshots. Based on the RUOT form, our method models these dynamics without requiring prior knowledge of growth and death processes or additional information, allowing them to be learnt directly from data. Theoretically, we explore the connections between the RUOT and Schr\"odinger bridge problem and discuss the key challenges and potential solutions. The effectiveness of our method is demonstrated with a synthetic gene regulatory network. Compared with other methods, our approach accurately identifies growth and transition patterns, eliminates false transitions, and constructs the Waddington developmental landscape.<|reference_end|>
arxiv
@article{zhang2024learning, title={Learning Stochastic Dynamics from Snapshots through Regularized Unbalanced Optimal Transport}, author={Zhenyi Zhang, Tiejun Li, Peijie Zhou}, journal={arXiv preprint arXiv:2410.00844}, year={2024}, archivePrefix={arXiv}, eprint={2410.00844}, primaryClass={cs.LG math.OC physics.comp-ph q-bio.QM} }
zhang2024learning
arxiv-664153
2410.00846
Why Are Learned Indexes So Effective but Sometimes Ineffective?
<|reference_start|>Why Are Learned Indexes So Effective but Sometimes Ineffective?: Learned indexes have attracted significant research interest due to their ability to offer better space-time trade-offs compared to traditional B+-tree variants. Among various learned indexes, the PGM-Index based on error-bounded piecewise linear approximation is an elegant data structure that has demonstrated \emph{provably} superior performance over conventional B+-tree indexes. In this paper, we explore two interesting research questions regarding the PGM-Index: (a) \emph{Why are PGM-Indexes theoretically effective?} and (b) \emph{Why do PGM-Indexes underperform in practice?} For question~(a), we first prove that, for a set of $N$ sorted keys, the PGM-Index can, with high probability, achieve a lookup time of $O(\log\log N)$ while using $O(N)$ space. To the best of our knowledge, this is the \textbf{tightest bound} for learned indexes to date. For question~(b), we identify that querying PGM-Indexes is highly memory-bound, where the internal error-bounded search operations often become the bottleneck. To fill the performance gap, we propose PGM++, a \emph{simple yet effective} extension to the original PGM-Index that employs a mixture of different search strategies, with hyper-parameters automatically tuned through a calibrated cost model. Extensive experiments on real workloads demonstrate that PGM++ establishes a new Pareto frontier. At comparable space costs, PGM++ speeds up index lookup queries by up to $\mathbf{2.31\times}$ and $\mathbf{1.56\times}$ when compared to the original PGM-Index and state-of-the-art learned indexes.<|reference_end|>
arxiv
@article{liu2024why, title={Why Are Learned Indexes So Effective but Sometimes Ineffective?}, author={Qiyu Liu, Siyuan Han, Yanlin Qi, Jingshu Peng, Jin Li, Longlong Lin, Lei Chen}, journal={arXiv preprint arXiv:2410.00846}, year={2024}, archivePrefix={arXiv}, eprint={2410.00846}, primaryClass={cs.DB} }
liu2024why
arxiv-664154
2410.00847
Uncertainty-aware Reward Model: Teaching Reward Models to Know What is Unknown
<|reference_start|>Uncertainty-aware Reward Model: Teaching Reward Models to Know What is Unknown: Reward models (RM) play a critical role in aligning generations of large language models (LLM) to human expectations. However, prevailing RMs fail to capture the stochasticity within human preferences and cannot effectively evaluate the reliability of reward predictions. To address these issues, we propose Uncertainty-aware RM (URM) and Uncertainty-aware RM Ensemble (URME) to incorporate and manage uncertainty in reward modeling. URM can model the distribution of disentangled attributes within human preferences, while URME quantifies uncertainty through discrepancies in the ensemble, thereby identifying potential lack of knowledge during reward evaluation. Experiment results indicate that the proposed URM achieves state-of-the-art performance compared to models with the same size, demonstrating the effectiveness of modeling uncertainty within human preferences. Furthermore, empirical results show that through uncertainty quantification, URM and URME can identify unreliable predictions to improve the quality of reward evaluations.<|reference_end|>
arxiv
@article{lou2024uncertainty-aware, title={Uncertainty-aware Reward Model: Teaching Reward Models to Know What is Unknown}, author={Xingzhou Lou, Dong Yan, Wei Shen, Yuzi Yan, Jian Xie, Junge Zhang}, journal={arXiv preprint arXiv:2410.00847}, year={2024}, archivePrefix={arXiv}, eprint={2410.00847}, primaryClass={cs.LG} }
lou2024uncertainty-aware
arxiv-664155
2410.00848
An EM Gradient Algorithm for Mixture Models with Components Derived from the Manly Transformation
<|reference_start|>An EM Gradient Algorithm for Mixture Models with Components Derived from the Manly Transformation: Zhu and Melnykov (2018) develop a model to fit mixture models when the components are derived from the Manly transformation. Their EM algorithm utilizes Nelder-Mead optimization in the M-step to update the skew parameter, $\boldsymbol{\lambda}_g$. An alternative EM gradient algorithm is proposed, using one step of Newton's method, when initial estimates for the model parameters are good.<|reference_end|>
arxiv
@article{clark2024an, title={An EM Gradient Algorithm for Mixture Models with Components Derived from the Manly Transformation}, author={Katharine M. Clark and Paul D. McNicholas}, journal={arXiv preprint arXiv:2410.00848}, year={2024}, archivePrefix={arXiv}, eprint={2410.00848}, primaryClass={stat.ML cs.LG} }
clark2024an
arxiv-664156
2410.00849
Energy-Quality-aware Variable Framerate Pareto-Front for Adaptive Video Streaming
<|reference_start|>Energy-Quality-aware Variable Framerate Pareto-Front for Adaptive Video Streaming: Optimizing framerate for a given bitrate-spatial resolution pair in adaptive video streaming is essential to maintain perceptual quality while considering decoding complexity. Low framerates at low bitrates reduce compression artifacts and decrease decoding energy. We propose a novel method, Decoding-complexity aware Framerate Prediction (DECODRA), which employs a Variable Framerate Pareto-front approach to predict an optimized framerate that minimizes decoding energy under quality degradation constraints. DECODRA dynamically adjusts the framerate based on current bitrate and spatial resolution, balancing trade-offs between framerate, perceptual quality, and decoding complexity. Extensive experimentation with the Inter-4K dataset demonstrates DECODRA's effectiveness, yielding an average decoding energy reduction of up to 13.45%, with minimal VMAF reduction of 0.33 points at a low-quality degradation threshold, compared to the default 60 fps encoding. Even at an aggressive threshold, DECODRA achieves significant energy savings of 13.45% while only reducing VMAF by 2.11 points. In this way, DECODRA extends mobile device battery life and reduces the energy footprint of streaming services by providing a more energy-efficient video streaming pipeline.<|reference_end|>
arxiv
@article{rajendran2024energy-quality-aware, title={Energy-Quality-aware Variable Framerate Pareto-Front for Adaptive Video Streaming}, author={Prajit T Rajendran and Samira Afzal and Vignesh V Menon and Christian Timmerer}, journal={arXiv preprint arXiv:2410.00849}, year={2024}, archivePrefix={arXiv}, eprint={2410.00849}, primaryClass={cs.MM} }
rajendran2024energy-quality-aware
arxiv-664157
2410.00857
Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis
<|reference_start|>Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis: Retrieval Augmented Generation (RAG) is a widely used approach for leveraging external context in several natural language applications such as question answering and information retrieval. Yet, the exact nature in which a Language Model (LM) leverages this non-parametric memory or retrieved context isn't clearly understood. This paper mechanistically examines the RAG pipeline to highlight that LMs demonstrate a "shortcut'' effect and have a strong bias towards utilizing the retrieved context to answer questions, while relying minimally on model priors. We propose (a) Causal Mediation Analysis, for proving that parametric memory is minimally utilized when answering a question, and (b) Attention Contributions and Knockouts, for showing that the last token residual stream does not get enriched from the subject token in the question, but gets enriched from tokens of the RAG-context. We find this pronounced "shortcut'' behaviour to be true across both LLMs (e.g., LlaMa) and SLMs (e.g., Phi)<|reference_end|>
arxiv
@article{ghosh2024quantifying, title={Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis}, author={Reshmi Ghosh, Rahul Seetharaman, Hitesh Wadhwa, Somyaa Aggarwal, Samyadeep Basu, Soundararajan Srinivasan, Wenlong Zhao, Shreyas Chaudhari, Ehsan Aghazadeh}, journal={arXiv preprint arXiv:2410.00857}, year={2024}, archivePrefix={arXiv}, eprint={2410.00857}, primaryClass={cs.CL} }
ghosh2024quantifying
arxiv-664158
2410.00859
Improved Sample Complexity of Imitation Learning for Barrier Model Predictive Control
<|reference_start|>Improved Sample Complexity of Imitation Learning for Barrier Model Predictive Control: Recent work in imitation learning has shown that having an expert controller that is both suitably smooth and stable enables stronger guarantees on the performance of the learned controller. However, constructing such smoothed expert controllers for arbitrary systems remains challenging, especially in the presence of input and state constraints. As our primary contribution, we show how such a smoothed expert can be designed for a general class of systems using a log-barrier-based relaxation of a standard Model Predictive Control (MPC) optimization problem. Improving upon our previous work, we show that barrier MPC achieves theoretically optimal error-to-smoothness tradeoff along some direction. At the core of this theoretical guarantee on smoothness is an improved lower bound we prove on the optimality gap of the analytic center associated with a convex Lipschitz function, which we believe could be of independent interest. We validate our theoretical findings via experiments, demonstrating the merits of our smoothing approach over randomized smoothing.<|reference_end|>
arxiv
@article{pfrommer2024improved, title={Improved Sample Complexity of Imitation Learning for Barrier Model Predictive Control}, author={Daniel Pfrommer, Swati Padmanabhan, Kwangjun Ahn, Jack Umenberger, Tobia Marcucci, Zakaria Mhammedi, Ali Jadbabaie}, journal={arXiv preprint arXiv:2410.00859}, year={2024}, archivePrefix={arXiv}, eprint={2410.00859}, primaryClass={eess.SY cs.LG cs.SY} }
pfrommer2024improved
arxiv-664159
2410.00860
Enhancing Web Spam Detection through a Blockchain-Enabled Crowdsourcing Mechanism
<|reference_start|>Enhancing Web Spam Detection through a Blockchain-Enabled Crowdsourcing Mechanism: The proliferation of spam on the Web has necessitated the development of machine learning models to automate their detection. However, the dynamic nature of spam and the sophisticated evasion techniques employed by spammers often lead to low accuracy in these models. Traditional machine-learning approaches struggle to keep pace with spammers' constantly evolving tactics, resulting in a persistent challenge to maintain high detection rates. To address this, we propose blockchain-enabled incentivized crowdsourcing as a novel solution to enhance spam detection systems. We create an incentive mechanism for data collection and labeling by leveraging blockchain's decentralized and transparent framework. Contributors are rewarded for accurate labels and penalized for inaccuracies, ensuring high-quality data. A smart contract governs the submission and evaluation process, with participants staking cryptocurrency as collateral to guarantee integrity. Simulations show that incentivized crowdsourcing improves data quality, leading to more effective machine-learning models for spam detection. This approach offers a scalable and adaptable solution to the challenges of traditional methods.<|reference_end|>
arxiv
@article{kader2024enhancing, title={Enhancing Web Spam Detection through a Blockchain-Enabled Crowdsourcing Mechanism}, author={Noah Kader and Inwon Kang and Oshani Seneviratne}, journal={arXiv preprint arXiv:2410.00860}, year={2024}, archivePrefix={arXiv}, eprint={2410.00860}, primaryClass={cs.CR cs.SI} }
kader2024enhancing
arxiv-664160
2410.00862
Timber! Poisoning Decision Trees
<|reference_start|>Timber! Poisoning Decision Trees: We present Timber, the first white-box poisoning attack targeting decision trees. Timber is based on a greedy attack strategy leveraging sub-tree retraining to efficiently estimate the damage performed by poisoning a given training instance. The attack relies on a tree annotation procedure which enables sorting training instances so that they are processed in increasing order of computational cost of sub-tree retraining. This sorting yields a variant of Timber supporting an early stopping criterion designed to make poisoning attacks more efficient and feasible on larger datasets. We also discuss an extension of Timber to traditional random forest models, which is useful because decision trees are normally combined into ensembles to improve their predictive power. Our experimental evaluation on public datasets shows that our attacks outperform existing baselines in terms of effectiveness, efficiency or both. Moreover, we show that two representative defenses can mitigate the effect of our attacks, but fail at effectively thwarting them.<|reference_end|>
arxiv
@article{calzavara2024timber!, title={Timber! Poisoning Decision Trees}, author={Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori}, journal={arXiv preprint arXiv:2410.00862}, year={2024}, archivePrefix={arXiv}, eprint={2410.00862}, primaryClass={cs.LG cs.CR stat.ML} }
calzavara2024timber!
arxiv-664161
2410.00863
On the Implications of Verbose LLM Outputs: A Case Study in Translation Evaluation
<|reference_start|>On the Implications of Verbose LLM Outputs: A Case Study in Translation Evaluation: This paper investigates the impact of verbose LLM translations on evaluation. We first demonstrate the prevalence of this behavior across several LLM outputs drawn from the WMT 2024 general shared task on machine translation. We then identify the primary triggers of verbosity, including safety, copyright concerns, and insufficient context in short input queries. Finally, we show that ignoring this behavior unfairly penalizes more verbose LLMs according to both automatic and human evaluations, highlighting the need to address this issue for more accurate future evaluations.<|reference_end|>
arxiv
@article{briakou2024on, title={On the Implications of Verbose LLM Outputs: A Case Study in Translation Evaluation}, author={Eleftheria Briakou, Zhongtao Liu, Colin Cherry, Markus Freitag}, journal={arXiv preprint arXiv:2410.00863}, year={2024}, archivePrefix={arXiv}, eprint={2410.00863}, primaryClass={cs.CL} }
briakou2024on
arxiv-664162
2410.00866
"I don't trust them": Exploring Perceptions of Fact-checking Entities for Flagging Online Misinformation
<|reference_start|>"I don't trust them": Exploring Perceptions of Fact-checking Entities for Flagging Online Misinformation: The spread of misinformation through online social media platforms has had substantial societal consequences. As a result, platforms have introduced measures to alert users of news content that may be misleading or contain inaccuracies as a means to discourage them from sharing it. These interventions sometimes cite external sources, such as fact-checking organizations and news outlets, for providing assessments related to the accuracy of the content. However, it is unclear whether users trust the assessments provided by these entities and whether perceptions vary across different topics of news. We conducted an online study with 655 US participants to explore user perceptions of eight categories of fact-checking entities across two misinformation topics, as well as factors that may impact users' perceptions. We found that participants' opinions regarding the trustworthiness and bias of the entities varied greatly, aligning largely with their political preference. However, just the presence of a fact-checking label appeared to discourage participants from sharing the headlines studied. Our results hint at the need for further exploring fact-checking entities that may be perceived as neutral, as well as the potential for incorporating multiple assessments in such labels.<|reference_end|>
arxiv
@article{habib2024"i, title={"I don't trust them": Exploring Perceptions of Fact-checking Entities for Flagging Online Misinformation}, author={Hana Habib, Sara Elsharawy, Rifat Rahman}, journal={arXiv preprint arXiv:2410.00866}, year={2024}, archivePrefix={arXiv}, eprint={2410.00866}, primaryClass={cs.HC cs.CY cs.SI} }
habib2024"i
arxiv-664163
2410.00868
Fine-Grained Gradient Restriction: A Simple Approach for Mitigating Catastrophic Forgetting
<|reference_start|>Fine-Grained Gradient Restriction: A Simple Approach for Mitigating Catastrophic Forgetting: A fundamental challenge in continual learning is to balance the trade-off between learning new tasks and remembering the previously acquired knowledge. Gradient Episodic Memory (GEM) achieves this balance by utilizing a subset of past training samples to restrict the update direction of the model parameters. In this work, we start by analyzing an often overlooked hyper-parameter in GEM, the memory strength, which boosts the empirical performance by further constraining the update direction. We show that memory strength is effective mainly because it improves GEM's generalization ability and therefore leads to a more favorable trade-off. By this finding, we propose two approaches that more flexibly constrain the update direction. Our methods are able to achieve uniformly better Pareto Frontiers of remembering old and learning new knowledge than using memory strength. We further propose a computationally efficient method to approximately solve the optimization problem with more constraints.<|reference_end|>
arxiv
@article{liu2024fine-grained, title={Fine-Grained Gradient Restriction: A Simple Approach for Mitigating Catastrophic Forgetting}, author={Bo Liu, Mao Ye, Peter Stone, Qiang Liu}, journal={arXiv preprint arXiv:2410.00868}, year={2024}, archivePrefix={arXiv}, eprint={2410.00868}, primaryClass={cs.LG} }
liu2024fine-grained
arxiv-664164
2410.00871
MAP: Unleashing Hybrid Mamba-Transformer Vision Backbone's Potential with Masked Autoregressive Pretraining
<|reference_start|>MAP: Unleashing Hybrid Mamba-Transformer Vision Backbone's Potential with Masked Autoregressive Pretraining: Mamba has achieved significant advantages in long-context modeling and autoregressive tasks, but its scalability with large parameters remains a major limitation in vision applications. Pretraining is a widely used strategy to enhance backbone model performance. Although the success of Masked Autoencoder in Transformer pretraining is well recognized, it does not significantly improve Mamba's visual learning performance. We found that using the correct autoregressive pretraining can significantly boost the performance of the Mamba architecture. Based on this analysis, we propose Masked Autoregressive Pretraining (MAP) to pretrain a hybrid Mamba-Transformer vision backbone network. This strategy combines the strengths of both MAE and Autoregressive pretraining, improving the performance of Mamba and Transformer modules within a unified paradigm. Additionally, in terms of integrating Mamba and Transformer modules, we empirically found that inserting Transformer layers at regular intervals within Mamba layers can significantly enhance downstream task performance. Experimental results show that both the pure Mamba architecture and the hybrid Mamba-Transformer vision backbone network pretrained with MAP significantly outperform other pretraining strategies, achieving state-of-the-art performance. We validate the effectiveness of the method on both 2D and 3D datasets and provide detailed ablation studies to support the design choices for each component.<|reference_end|>
arxiv
@article{liu2024map:, title={MAP: Unleashing Hybrid Mamba-Transformer Vision Backbone's Potential with Masked Autoregressive Pretraining}, author={Yunze Liu, Li Yi}, journal={arXiv preprint arXiv:2410.00871}, year={2024}, archivePrefix={arXiv}, eprint={2410.00871}, primaryClass={cs.CV cs.AI} }
liu2024map:
arxiv-664165
2410.00872
Do Music Generation Models Encode Music Theory?
<|reference_start|>Do Music Generation Models Encode Music Theory?: Music foundation models possess impressive music generation capabilities. When people compose music, they may infuse their understanding of music into their work, by using notes and intervals to craft melodies, chords to build progressions, and tempo to create a rhythmic feel. To what extent is this true of music generation models? More specifically, are fundamental Western music theory concepts observable within the "inner workings" of these models? Recent work proposed leveraging latent audio representations from music generation models towards music information retrieval tasks (e.g. genre classification, emotion recognition), which suggests that high-level musical characteristics are encoded within these models. However, probing individual music theory concepts (e.g. tempo, pitch class, chord quality) remains under-explored. Thus, we introduce SynTheory, a synthetic MIDI and audio music theory dataset, consisting of tempos, time signatures, notes, intervals, scales, chords, and chord progressions concepts. We then propose a framework to probe for these music theory concepts in music foundation models (Jukebox and MusicGen) and assess how strongly they encode these concepts within their internal representations. Our findings suggest that music theory concepts are discernible within foundation models and that the degree to which they are detectable varies by model size and layer.<|reference_end|>
arxiv
@article{wei2024do, title={Do Music Generation Models Encode Music Theory?}, author={Megan Wei, Michael Freeman, Chris Donahue, Chen Sun}, journal={arXiv preprint arXiv:2410.00872}, year={2024}, archivePrefix={arXiv}, eprint={2410.00872}, primaryClass={cs.SD cs.AI cs.CL cs.LG eess.AS} }
wei2024do
arxiv-664166
2410.00873
Aligning Human and LLM Judgments: Insights from EvalAssist on Task-Specific Evaluations and AI-assisted Assessment Strategy Preferences
<|reference_start|>Aligning Human and LLM Judgments: Insights from EvalAssist on Task-Specific Evaluations and AI-assisted Assessment Strategy Preferences: Evaluation of large language model (LLM) outputs requires users to make critical judgments about the best outputs across various configurations. This process is costly and takes time given the large amounts of data. LLMs are increasingly used as evaluators to filter training data, evaluate model performance or assist human evaluators with detailed assessments. To support this process, effective front-end tools are critical for evaluation. Two common approaches for using LLMs as evaluators are direct assessment and pairwise comparison. In our study with machine learning practitioners (n=15), each completing 6 tasks yielding 131 evaluations, we explore how task-related factors and assessment strategies influence criteria refinement and user perceptions. Findings show that users performed more evaluations with direct assessment by making criteria task-specific, modifying judgments, and changing the evaluator model. We conclude with recommendations for how systems can better support interactions in LLM-assisted evaluations.<|reference_end|>
arxiv
@article{ashktorab2024aligning, title={Aligning Human and LLM Judgments: Insights from EvalAssist on Task-Specific Evaluations and AI-assisted Assessment Strategy Preferences}, author={Zahra Ashktorab, Michael Desmond, Qian Pan, James M. Johnson, Martin Santillan Cooper, Elizabeth M. Daly, Rahul Nair, Tejaswini Pedapati, Swapnaja Achintalwar, and Werner Geyer}, journal={arXiv preprint arXiv:2410.00873}, year={2024}, archivePrefix={arXiv}, eprint={2410.00873}, primaryClass={cs.HC} }
ashktorab2024aligning
arxiv-664167
2410.00875
Review of blockchain application with Graph Neural Networks, Graph Convolutional Networks and Convolutional Neural Networks
<|reference_start|>Review of blockchain application with Graph Neural Networks, Graph Convolutional Networks and Convolutional Neural Networks: This paper reviews the applications of Graph Neural Networks (GNNs), Graph Convolutional Networks (GCNs), and Convolutional Neural Networks (CNNs) in blockchain technology. As the complexity and adoption of blockchain networks continue to grow, traditional analytical methods are proving inadequate in capturing the intricate relationships and dynamic behaviors of decentralized systems. To address these limitations, deep learning models such as GNNs, GCNs, and CNNs offer robust solutions by leveraging the unique graph-based and temporal structures inherent in blockchain architectures. GNNs and GCNs, in particular, excel in modeling the relational data of blockchain nodes and transactions, making them ideal for applications such as fraud detection, transaction verification, and smart contract analysis. Meanwhile, CNNs can be adapted to analyze blockchain data when represented as structured matrices, revealing hidden temporal and spatial patterns in transaction flows. This paper explores how these models enhance the efficiency, security, and scalability of both linear blockchains and Directed Acyclic Graph (DAG)-based systems, providing a comprehensive overview of their strengths and future research directions. By integrating advanced neural network techniques, we aim to demonstrate the potential of these models in revolutionizing blockchain analytics, paving the way for more sophisticated decentralized applications and improved network performance.<|reference_end|>
arxiv
@article{ancelotti2024review, title={Review of blockchain application with Graph Neural Networks, Graph Convolutional Networks and Convolutional Neural Networks}, author={Amy Ancelotti, Claudia Liason}, journal={arXiv preprint arXiv:2410.00875}, year={2024}, archivePrefix={arXiv}, eprint={2410.00875}, primaryClass={cs.LG} }
ancelotti2024review
arxiv-664168
2410.00876
Replacing Paths with Connection-Biased Attention for Knowledge Graph Completion
<|reference_start|>Replacing Paths with Connection-Biased Attention for Knowledge Graph Completion: Knowledge graph (KG) completion aims to identify additional facts that can be inferred from the existing facts in the KG. Recent developments in this field have explored this task in the inductive setting, where at test time one sees entities that were not present during training; the most performant models in the inductive setting have employed path encoding modules in addition to standard subgraph encoding modules. This work similarly focuses on KG completion in the inductive setting, without the explicit use of path encodings, which can be time-consuming and introduces several hyperparameters that require costly hyperparameter optimization. Our approach uses a Transformer-based subgraph encoding module only; we introduce connection-biased attention and entity role embeddings into the subgraph encoding module to eliminate the need for an expensive and time-consuming path encoding module. Evaluations on standard inductive KG completion benchmark datasets demonstrate that our Connection-Biased Link Prediction (CBLiP) model has superior performance to models that do not use path information. Compared to models that utilize path information, CBLiP shows competitive or superior performance while being faster. Additionally, to show that the effectiveness of connection-biased attention and entity role embeddings also holds in the transductive setting, we compare CBLiP's performance on the relation prediction task in the transductive setting.<|reference_end|>
arxiv
@article{dutta2024replacing, title={Replacing Paths with Connection-Biased Attention for Knowledge Graph Completion}, author={Sharmishtha Dutta, Alex Gittens, Mohammed J. Zaki, Charu C. Aggarwal}, journal={arXiv preprint arXiv:2410.00876}, year={2024}, archivePrefix={arXiv}, eprint={2410.00876}, primaryClass={cs.LG} }
dutta2024replacing
arxiv-664169
2410.00878
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
<|reference_start|>Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective: The perturbation analysis of linear solvers applied to systems arising broadly in machine learning settings -- for instance, when using linear regression models -- establishes an important perspective when reframing these analyses through the lens of a data poisoning attack. By analyzing solvers' responses to such attacks, this work aims to contribute to the development of more robust linear solvers and provide insights into poisoning attacks on linear solvers. In particular, we investigate how the errors in the input data will affect the fitting error and accuracy of the solution from a linear system-solving algorithm under perturbations common in adversarial attacks. We propose data perturbation through two distinct knowledge levels, developing a poisoning optimization and studying two methods of perturbation: Label-guided Perturbation (LP) and Unconditioning Perturbation (UP). Existing works mainly focus on deriving the worst-case perturbation bound from a theoretical perspective, and the analysis is often limited to specific kinds of linear system solvers. Under the circumstance that the data is intentionally perturbed -- as is the case with data poisoning -- we seek to understand how different kinds of solvers react to these perturbations, identifying those algorithms most impacted by different types of adversarial attacks.<|reference_end|>
arxiv
@article{liu2024empirical, title={Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective}, author={Yixin Liu, Arielle Carr, Lichao Sun}, journal={arXiv preprint arXiv:2410.00878}, year={2024}, archivePrefix={arXiv}, eprint={2410.00878}, primaryClass={cs.LG cs.CR cs.NA math.NA} }
liu2024empirical
arxiv-664170
2410.00880
GEMS: Generative Expert Metric System through Iterative Prompt Priming
<|reference_start|>GEMS: Generative Expert Metric System through Iterative Prompt Priming: Across domains, metrics and measurements are fundamental to identifying challenges, informing decisions, and resolving conflicts. Despite the abundance of data available in this information age, not only can it be challenging for a single expert to work across multi-disciplinary data, but non-experts can also find it unintuitive to create effective measures or transform theories into context-specific metrics that are chosen appropriately. This technical report addresses this challenge by examining software communities within large software corporations, where different measures are used as proxies to locate counterparts within the organization to transfer tacit knowledge. We propose a prompt-engineering framework inspired by neural activities, demonstrating that generative models can extract and summarize theories and perform basic reasoning, thereby transforming concepts into context-aware metrics to support software communities given software repository data. While this research zoomed in on software communities, we believe the framework's applicability extends across various fields, showcasing expert-theory-inspired metrics that aid in triaging complex challenges.<|reference_end|>
arxiv
@article{cheng2024gems:, title={GEMS: Generative Expert Metric System through Iterative Prompt Priming}, author={Ti-Chung Cheng, Carmen Badea, Christian Bird, Thomas Zimmermann, Robert DeLine, Nicole Forsgren, Denae Ford}, journal={arXiv preprint arXiv:2410.00880}, year={2024}, archivePrefix={arXiv}, eprint={2410.00880}, primaryClass={cs.SE cs.AI} }
cheng2024gems:
arxiv-664171
2410.00882
Lazy brute-force sampling: A universal perfect sampling scheme from Markov chains
<|reference_start|>Lazy brute-force sampling: A universal perfect sampling scheme from Markov chains: We show that, under mild assumptions, every distribution on the hypercube $\{0, 1\}^{n}$ that admits a polynomial-time Markov chain approximate sampler also has an exact sampling algorithm with expected running time in poly$(n)$.<|reference_end|>
arxiv
@article{göbel2024lazy, title={Lazy brute-force sampling: A universal perfect sampling scheme from Markov chains}, author={Andreas G{\"o}bel and Marcus Pappik}, journal={arXiv preprint arXiv:2410.00882}, year={2024}, archivePrefix={arXiv}, eprint={2410.00882}, primaryClass={cs.CC math.PR} }
göbel2024lazy
arxiv-664172
2410.00884
Low-Latency Sliding Window Connectivity
<|reference_start|>Low-Latency Sliding Window Connectivity: Connectivity queries, which check whether vertices belong to the same connected component, are fundamental in graph computations. Sliding window connectivity processes these queries over sliding windows, facilitating real-time streaming graph analytics. However, existing methods struggle with low-latency processing due to the significant overhead of continuously updating index structures as edges are inserted and deleted. We introduce a novel approach that leverages spanning trees to efficiently process queries. The novelty of this method lies in its ability to maintain spanning trees efficiently as window updates occur. Notably, our approach completely eliminates the need for replacement edge searches, a traditional bottleneck in managing spanning trees during edge deletions. We also present several optimizations to maximize the potential of spanning-tree-based indexes. Our comprehensive experimental evaluation shows that index update latency in spanning trees can be reduced by up to 458x while maintaining query performance, leading to an 8x improvement in throughput. Our approach also significantly outperforms the state-of-the-art in both query processing and index updates. Additionally, our methods use significantly less memory and demonstrate consistent efficiency across various settings.<|reference_end|>
arxiv
@article{zhang2024low-latency, title={Low-Latency Sliding Window Connectivity}, author={Chao Zhang, Angela Bonifati, Tamer {\"O}zsu}, journal={arXiv preprint arXiv:2410.00884}, year={2024}, archivePrefix={arXiv}, eprint={2410.00884}, primaryClass={cs.DB} }
zhang2024low-latency
arxiv-664173
2410.00890
Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation
<|reference_start|>Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation: Generating high-quality 3D content from text, single images, or sparse view images remains a challenging task with broad applications. Existing methods typically employ multi-view diffusion models to synthesize multi-view images, followed by a feed-forward process for 3D reconstruction. However, these approaches are often constrained by a small and fixed number of input views, limiting their ability to capture diverse viewpoints and, even worse, leading to suboptimal generation results if the synthesized views are of poor quality. To address these limitations, we propose Flex3D, a novel two-stage framework capable of leveraging an arbitrary number of high-quality input views. The first stage consists of a candidate view generation and curation pipeline. We employ a fine-tuned multi-view image diffusion model and a video diffusion model to generate a pool of candidate views, enabling a rich representation of the target 3D object. Subsequently, a view selection pipeline filters these views based on quality and consistency, ensuring that only the high-quality and reliable views are used for reconstruction. In the second stage, the curated views are fed into a Flexible Reconstruction Model (FlexRM), built upon a transformer architecture that can effectively process an arbitrary number of inputs. FlexRM directly outputs 3D Gaussian points leveraging a tri-plane representation, enabling efficient and detailed 3D generation. Through extensive exploration of design and training strategies, we optimize FlexRM to achieve superior performance in both reconstruction and generation tasks. Our results demonstrate that Flex3D achieves state-of-the-art performance, with a user study winning rate of over 92% in 3D generation tasks when compared to several of the latest feed-forward 3D generative models.<|reference_end|>
arxiv
@article{han2024flex3d:, title={Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation}, author={Junlin Han, Jianyuan Wang, Andrea Vedaldi, Philip Torr, Filippos Kokkinos}, journal={arXiv preprint arXiv:2410.00890}, year={2024}, archivePrefix={arXiv}, eprint={2410.00890}, primaryClass={cs.CV cs.GR eess.IV} }
han2024flex3d:
arxiv-664174
2410.00896
Outage-Constrained Sum Secrecy Rate Maximization for STAR-RIS with Energy-Harvesting Eavesdroppers
<|reference_start|>Outage-Constrained Sum Secrecy Rate Maximization for STAR-RIS with Energy-Harvesting Eavesdroppers: This article proposes a novel strategy for enhancing secure wireless communication through the use of a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) in a multiple-input single-output system. In the presence of energy-harvesting eavesdroppers, the study aims to maximize the secrecy rate while adhering to strict energy harvesting constraints. By dynamically manipulating the wireless environment with the STAR-RIS, the research examines the balance between harvested energy and secrecy rate under two key protocols: energy splitting and mode selection. The study addresses both imperfect and perfect channel state information (CSI) and formulates a complex non-convex optimization problem, which is solved using a penalty concave convex procedure combined with an alternating optimization algorithm. The method optimizes beamforming and STAR-RIS transmission and reflection coefficients to achieve an optimal balance between secure communication and energy harvesting constraints. Numerical simulations show that the proposed approach is effective, even with imperfect CSI, and outperforms conventional RIS methods in terms of robust security and energy performance.<|reference_end|>
arxiv
@article{rostamikafaki2024outage-constrained, title={Outage-Constrained Sum Secrecy Rate Maximization for STAR-RIS with Energy-Harvesting Eavesdroppers}, author={Zahra Rostamikafaki, Francois Chan, and Claude D'Amours}, journal={arXiv preprint arXiv:2410.00896}, year={2024}, archivePrefix={arXiv}, eprint={2410.00896}, primaryClass={eess.SY cs.SY eess.SP} }
rostamikafaki2024outage-constrained
arxiv-664175
2410.00897
The Gradient of Health Data Privacy
<|reference_start|>The Gradient of Health Data Privacy: In the era of digital health and artificial intelligence, the management of patient data privacy has become increasingly complex, with significant implications for global health equity and patient trust. This paper introduces a novel "privacy gradient" approach to health data governance, offering a more nuanced and adaptive framework than traditional binary privacy models. Our multidimensional concept considers factors such as data sensitivity, stakeholder relationships, purpose of use, and temporal aspects, allowing for context-sensitive privacy protections. Through policy analyses, ethical considerations, and case studies spanning adolescent health, integrated care, and genomic research, we demonstrate how this approach can address critical privacy challenges in diverse healthcare settings worldwide. The privacy gradient model has the potential to enhance patient engagement, improve care coordination, and accelerate medical research while safeguarding individual privacy rights. We provide policy recommendations for implementing this approach, considering its impact on healthcare systems, research infrastructures, and global health initiatives. This work aims to inform policymakers, healthcare leaders, and digital health innovators, contributing to a more equitable, trustworthy, and effective global health data ecosystem in the digital age.<|reference_end|>
arxiv
@article{lin2024the, title={The Gradient of Health Data Privacy}, author={Baihan Lin}, journal={arXiv preprint arXiv:2410.00897}, year={2024}, archivePrefix={arXiv}, eprint={2410.00897}, primaryClass={cs.CY cs.AI cs.HC q-bio.OT} }
lin2024the
arxiv-664176
2410.00900
OSSA: Unsupervised One-Shot Style Adaptation
<|reference_start|>OSSA: Unsupervised One-Shot Style Adaptation: Despite their success in various vision tasks, deep neural network architectures often underperform in out-of-distribution scenarios due to the difference between training and target domain style. To address this limitation, we introduce One-Shot Style Adaptation (OSSA), a novel unsupervised domain adaptation method for object detection that utilizes a single, unlabeled target image to approximate the target domain style. Specifically, OSSA generates diverse target styles by perturbing the style statistics derived from a single target image and then applies these styles to a labeled source dataset at the feature level using Adaptive Instance Normalization (AdaIN). Extensive experiments show that OSSA establishes a new state-of-the-art among one-shot domain adaptation methods by a significant margin, and in some cases, even outperforms strong baselines that use thousands of unlabeled target images. By applying OSSA in various scenarios, including weather, simulated-to-real (sim2real), and visual-to-thermal adaptations, our study explores the overarching significance of the style gap in these contexts. OSSA's simplicity and efficiency allow easy integration into existing frameworks, providing a potentially viable solution for practical applications with limited data availability. Code is available at https://github.com/RobinGerster7/OSSA<|reference_end|>
arxiv
@article{gerster2024ossa:, title={OSSA: Unsupervised One-Shot Style Adaptation}, author={Robin Gerster, Holger Caesar, Matthias Rapp, Alexander Wolpert, and Michael Teutsch}, journal={arXiv preprint arXiv:2410.00900}, year={2024}, archivePrefix={arXiv}, eprint={2410.00900}, primaryClass={cs.CV} }
gerster2024ossa:
arxiv-664177
2410.00903
Causal Representation Learning with Generative Artificial Intelligence: Application to Texts as Treatments
<|reference_start|>Causal Representation Learning with Generative Artificial Intelligence: Application to Texts as Treatments: In this paper, we demonstrate how to enhance the validity of causal inference with unstructured high-dimensional treatments like texts, by leveraging the power of generative Artificial Intelligence. Specifically, we propose to use a deep generative model such as large language models (LLMs) to efficiently generate treatments and use their internal representation for subsequent causal effect estimation. We show that the knowledge of this true internal representation helps separate the treatment features of interest, such as specific sentiments and certain topics, from other possibly unknown confounding features. Unlike the existing methods, our proposed approach eliminates the need to learn causal representation from the data and hence produces more accurate and efficient estimates. We formally establish the conditions required for the nonparametric identification of the average treatment effect, propose an estimation strategy that avoids the violation of the overlap assumption, and derive the asymptotic properties of the proposed estimator through the application of double machine learning. Finally, using an instrumental variables approach, we extend the proposed methodology to settings in which the treatment feature is based on human perception rather than being assumed to be fixed given the treatment object. We conduct simulation studies using the generated text data with an open-source LLM, Llama3, to illustrate the advantages of our estimator over the state-of-the-art causal representation learning algorithms.<|reference_end|>
arxiv
@article{imai2024causal, title={Causal Representation Learning with Generative Artificial Intelligence: Application to Texts as Treatments}, author={Kosuke Imai, Kentaro Nakamura}, journal={arXiv preprint arXiv:2410.00903}, year={2024}, archivePrefix={arXiv}, eprint={2410.00903}, primaryClass={stat.AP cs.CL cs.LG} }
imai2024causal
arxiv-664178
2410.00905
Removing Distributional Discrepancies in Captions Improves Image-Text Alignment
<|reference_start|>Removing Distributional Discrepancies in Captions Improves Image-Text Alignment: In this paper, we introduce a model designed to improve the prediction of image-text alignment, targeting the challenge of compositional understanding in current visual-language models. Our approach focuses on generating high-quality training datasets for the alignment task by producing mixed-type negative captions derived from positive ones. Critically, we address the distribution imbalance between positive and negative captions to ensure that the alignment model does not depend solely on textual information but also considers the associated images for predicting alignment accurately. By creating this enhanced training data, we fine-tune an existing leading visual-language model to boost its capability in understanding alignment. Our model significantly outperforms current top-performing methods across various datasets. We also demonstrate the applicability of our model by ranking the images generated by text-to-image models based on text alignment. Project page: \url{https://yuheng-li.github.io/LLaVA-score/}<|reference_end|>
arxiv
@article{li2024removing, title={Removing Distributional Discrepancies in Captions Improves Image-Text Alignment}, author={Yuheng Li, Haotian Liu, Mu Cai, Yijun Li, Eli Shechtman, Zhe Lin, Yong Jae Lee, Krishna Kumar Singh}, journal={arXiv preprint arXiv:2410.00905}, year={2024}, archivePrefix={arXiv}, eprint={2410.00905}, primaryClass={cs.CV} }
li2024removing
arxiv-664179
2410.00906
Generative AI and Perceptual Harms: Who's Suspected of using LLMs?
<|reference_start|>Generative AI and Perceptual Harms: Who's Suspected of using LLMs?: Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people by generating ideas or producing higher quality work, like many other AI tools they may risk causing a variety of harms, disproportionately burdening historically marginalized groups. In this work, we introduce and evaluate perceptual harm, a term for the harm caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, each of which entailed human participants evaluating the profiles of fictional freelance writers. We asked participants whether they suspected the freelancers of using AI, the quality of their writing, and whether they should be hired. We found some support for perceptual harms against certain demographic groups, but that perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.<|reference_end|>
arxiv
@article{kadoma2024generative, title={Generative AI and Perceptual Harms: Who's Suspected of using LLMs?}, author={Kowe Kadoma, Dana{\"e} Metaxa, Mor Naaman}, journal={arXiv preprint arXiv:2410.00906}, year={2024}, archivePrefix={arXiv}, eprint={2410.00906}, primaryClass={cs.HC} }
kadoma2024generative
arxiv-664180
2410.00907
Addition is All You Need for Energy-efficient Language Models
<|reference_start|>Addition is All You Need for Energy-efficient Language Models: Large neural networks spend most computation on floating point tensor multiplications. In this work, we find that a floating point multiplier can be approximated by one integer adder with high precision. We propose the linear-complexity multiplication L-Mul algorithm that approximates floating point number multiplication with integer addition operations. Compared to 8-bit floating point multiplication, the new algorithm achieves higher precision while consuming significantly less bit-level computation. Since multiplying floating point numbers requires substantially higher energy compared to integer addition operations, applying the L-Mul operation in tensor processing hardware can potentially reduce the energy cost of element-wise floating point tensor multiplications by 95% and of dot products by 80%. We calculated the theoretical error expectation of L-Mul, and evaluated the algorithm on a wide range of textual, visual, and symbolic tasks, including natural language understanding, structural reasoning, mathematics, and commonsense question answering. Our numerical analysis experiments agree with the theoretical error estimation, which indicates that L-Mul with 4-bit mantissa achieves comparable precision as float8_e4m3 multiplications, and L-Mul with 3-bit mantissa outperforms float8_e5m2. Evaluation results on popular benchmarks show that directly applying L-Mul to the attention mechanism is almost lossless. We further show that replacing all floating point multiplications with 3-bit mantissa L-Mul in a transformer model achieves equivalent precision as using float8_e4m3 as accumulation precision in both fine-tuning and inference.<|reference_end|>
arxiv
@article{luo2024addition, title={Addition is All You Need for Energy-efficient Language Models}, author={Hongyin Luo, Wei Sun}, journal={arXiv preprint arXiv:2410.00907}, year={2024}, archivePrefix={arXiv}, eprint={2410.00907}, primaryClass={cs.CL} }
luo2024addition
arxiv-664181
2410.00911
Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning
<|reference_start|>Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning: Domain-Incremental Learning (DIL) involves the progressive adaptation of a model to new concepts across different domains. While recent advances in pre-trained models provide a solid foundation for DIL, learning new concepts often results in the catastrophic forgetting of pre-trained knowledge. Specifically, sequential model updates can overwrite both the representation and the classifier with knowledge from the latest domain. Thus, it is crucial to develop a representation and corresponding classifier that accommodate all seen domains throughout the learning process. To this end, we propose DUal ConsolidaTion (Duct) to unify and consolidate historical knowledge at both the representation and classifier levels. By merging the backbone of different stages, we create a representation space suitable for multiple domains incrementally. The merged representation serves as a balanced intermediary that captures task-specific features from all seen domains. Additionally, to address the mismatch between consolidated embeddings and the classifier, we introduce an extra classifier consolidation process. Leveraging class-wise semantic information, we estimate the classifier weights of old domains within the latest embedding space. By merging historical and estimated classifiers, we align them with the consolidated embedding space, facilitating incremental classification. Extensive experimental results on four benchmark datasets demonstrate Duct's state-of-the-art performance.<|reference_end|>
arxiv
@article{zhou2024dual, title={Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning}, author={Da-Wei Zhou, Zi-Wen Cai, Han-Jia Ye, Lijun Zhang, De-Chuan Zhan}, journal={arXiv preprint arXiv:2410.00911}, year={2024}, archivePrefix={arXiv}, eprint={2410.00911}, primaryClass={cs.CV cs.LG} }
zhou2024dual
arxiv-664182
2410.00916
IBM Quantum Computers: Evolution, Performance, and Future Directions
<|reference_start|>IBM Quantum Computers: Evolution, Performance, and Future Directions: Quantum computers represent a transformative frontier in computational technology, promising exponential speedups beyond classical computing limits. IBM Quantum has led significant advancements in both hardware and software, providing access to quantum hardware via IBM Cloud since 2016, achieving a milestone with the world's first accessible quantum computer. This article explores IBM's quantum computing journey, focusing on the development of practical quantum computers. We summarize the evolution and advancements of IBM Quantum's processors across generations, including their recent breakthrough surpassing the 1,000-qubit barrier. The paper reviews detailed performance metrics across various hardware, tracing their evolution over time and highlighting IBM Quantum's transition from the noisy intermediate-scale quantum (NISQ) computing era towards fault-tolerant quantum computing capabilities.<|reference_end|>
arxiv
@article{abughanem2024ibm, title={IBM Quantum Computers: Evolution, Performance, and Future Directions}, author={M. AbuGhanem}, journal={arXiv preprint arXiv:2410.00916}, year={2024}, archivePrefix={arXiv}, eprint={2410.00916}, primaryClass={quant-ph cs.AI cs.AR} }
abughanem2024ibm
arxiv-664183
2410.00917
Google Quantum AI's Quest for Error-Corrected Quantum Computers
<|reference_start|>Google Quantum AI's Quest for Error-Corrected Quantum Computers: Quantum computers stand at the forefront of technological innovation, offering exponential computational speed-ups that challenge classical computing capabilities. At the cutting edge of this transformation is Google Quantum AI, a leader in driving forward the development of practical quantum computers. This article provides a comprehensive review of Google Quantum AI's pivotal role in the quantum computing landscape over the past decade, emphasizing their significant strides towards achieving quantum computational supremacy. By exploring their advancements and contributions in quantum hardware, quantum software, error correction, and quantum algorithms, this study highlights the transformative impact of Google Quantum AI's initiatives in shaping the future of quantum computing technology.<|reference_end|>
arxiv
@article{abughanem2024google, title={Google Quantum AI's Quest for Error-Corrected Quantum Computers}, author={M. AbuGhanem}, journal={arXiv preprint arXiv:2410.00917}, year={2024}, archivePrefix={arXiv}, eprint={2410.00917}, primaryClass={quant-ph cs.AR} }
abughanem2024google
arxiv-664184
2410.00921
PREPARE: PREdicting PAndemic's REcurring Waves Amidst Mutations, Vaccination, and Lockdowns
<|reference_start|>PREPARE: PREdicting PAndemic's REcurring Waves Amidst Mutations, Vaccination, and Lockdowns: This study releases an adaptable framework that can provide insights to policymakers to predict the complex recurring waves of the pandemic in the medium term after the emergence of the virus, a phase marked by rapidly changing factors like virus mutations, lockdowns, and vaccinations, offering a way to forecast infection trends and stay ahead of future outbreaks even amidst uncertainty. The proposed model is validated on data from COVID-19 spread in Germany.<|reference_end|>
arxiv
@article{shahtori2024prepare:, title={PREPARE: PREdicting PAndemic's REcurring Waves Amidst Mutations, Vaccination, and Lockdowns}, author={Narges M. Shahtori and S. Farokh Atashzar}, journal={arXiv preprint arXiv:2410.00921}, year={2024}, archivePrefix={arXiv}, eprint={2410.00921}, primaryClass={q-bio.PE cs.NA cs.SY eess.SY math.NA math.PR nlin.CD} }
shahtori2024prepare:
arxiv-664185
2410.00923
On the topology and geometry of population-based SHM
<|reference_start|>On the topology and geometry of population-based SHM: Population-Based Structural Health Monitoring (PBSHM), aims to leverage information across populations of structures in order to enhance diagnostics on those with sparse data. The discipline of transfer learning provides the mechanism for this capability. One recent paper in PBSHM proposed a geometrical view in which the structures were represented as graphs in a metric "base space" with their data captured in the "total space" of a vector bundle above the graph space. This view was more suggestive than mathematically rigorous, although it did allow certain useful arguments. One bar to more rigorous analysis was the absence of a meaningful topology on the graph space, and thus no useful notion of continuity. The current paper aims to address this problem by moving to parametric families of structures in the base space, essentially changing points in the graph space to open balls. This allows the definition of open sets in the fibre space and thus allows continuous variation between fibres. The new ideas motivate a new geometrical mechanism for transfer learning in which data are transported from one fibre to an adjacent one; i.e., from one structure to another.<|reference_end|>
arxiv
@article{worden2024on, title={On the topology and geometry of population-based SHM}, author={Keith Worden, Tina A. Dardeno, Aidan J. Hughes, George Tsialiamanis}, journal={arXiv preprint arXiv:2410.00923}, year={2024}, archivePrefix={arXiv}, eprint={2410.00923}, primaryClass={stat.ML cs.DB cs.LG eess.SP} }
worden2024on
arxiv-664186
2410.00927
Text Clustering as Classification with LLMs
<|reference_start|>Text Clustering as Classification with LLMs: Text clustering remains valuable in real-world applications where manual labeling is cost-prohibitive. It facilitates efficient organization and analysis of information by grouping similar texts based on their representations. However, implementing this approach necessitates fine-tuned embedders for downstream data and sophisticated similarity metrics. To address this issue, this study presents a novel framework for text clustering that effectively leverages the in-context learning capacity of Large Language Models (LLMs). Instead of fine-tuning embedders, we propose to transform the text clustering into a classification task via LLM. First, we prompt LLM to generate potential labels for a given dataset. Second, after integrating similar labels generated by the LLM, we prompt the LLM to assign the most appropriate label to each sample in the dataset. Our framework has been experimentally proven to achieve comparable or superior performance to state-of-the-art clustering methods that employ embeddings, without requiring complex fine-tuning or clustering algorithms. We make our code available to the public for utilization at https://anonymous.4open.science/r/Text-Clustering-via-LLM-E500.<|reference_end|>
arxiv
@article{huang2024text, title={Text Clustering as Classification with LLMs}, author={Chen Huang and Guoxiu He}, journal={arXiv preprint arXiv:2410.00927}, year={2024}, archivePrefix={arXiv}, eprint={2410.00927}, primaryClass={cs.CL cs.IR} }
huang2024text
arxiv-664187
2410.00929
A Knowledge-Informed Large Language Model Framework for US Nuclear Power Plant Shutdown Initiating Event Classification for Probabilistic Risk Assessment
<|reference_start|>A Knowledge-Informed Large Language Model Framework for US Nuclear Power Plant Shutdown Initiating Event Classification for Probabilistic Risk Assessment: Identifying and classifying shutdown initiating events (SDIEs) is critical for developing low power shutdown probabilistic risk assessment for nuclear power plants. Existing computational approaches cannot achieve satisfactory performance due to the challenges of unavailable large, labeled datasets, imbalanced event types, and label noise. To address these challenges, we propose a hybrid pipeline that integrates a knowledge-informed machine learning model to prescreen non-SDIEs and a large language model (LLM) to classify SDIEs into four types. In the prescreening stage, we proposed a set of 44 SDIE text patterns that consist of the most salient keywords and phrases from six SDIE types. Text vectorization based on the SDIE patterns generates feature vectors that are highly separable by using a simple binary classifier. The second stage builds a Bidirectional Encoder Representations from Transformers (BERT)-based LLM, which learns generic English language representations from self-supervised pretraining on a large dataset and adapts to SDIE classification by fine-tuning it on an SDIE dataset. The proposed approaches are evaluated on a dataset with 10,928 events using precision, recall ratio, F1 score, and average accuracy. The results demonstrate that the prescreening stage can exclude more than 97% non-SDIEs, and the LLM achieves an average accuracy of 93.4% for SDIE classification.<|reference_end|>
arxiv
@article{xian2024a, title={A Knowledge-Informed Large Language Model Framework for U.S. Nuclear Power Plant Shutdown Initiating Event Classification for Probabilistic Risk Assessment}, author={Min Xian, Tao Wang, Sai Zhang, Fei Xu, Zhegang Ma}, journal={arXiv preprint arXiv:2410.00929}, year={2024}, archivePrefix={arXiv}, eprint={2410.00929}, primaryClass={cs.AI cs.LG} }
xian2024a
arxiv-664188
2410.00930
ACEV: Unsupervised Intersecting Manifold Segmentation using Adaptation to Angular Change of Eigenvectors in Intrinsic Dimension
<|reference_start|>ACEV: Unsupervised Intersecting Manifold Segmentation using Adaptation to Angular Change of Eigenvectors in Intrinsic Dimension: Intersecting manifold segmentation has been a focus of research, where individual manifolds, that intersect with other manifolds, are separated to discover their distinct properties. The proposed method is based on the intuition that when a manifold in $D$ dimensional space with an intrinsic dimension of $d$ intersects with another manifold, the data variance grows in more than $d$ directions. The proposed method measures local data variances and determines their vector directions. It counts the number of vectors with non-zero variance, which determines the manifold's intrinsic dimension. For detection of the intersection region, the method adapts to the changes in the angular gaps between the corresponding direction vectors of the child and parent using exponential moving averages within a tree structure construction. Accordingly, it includes those data points in the same manifold whose neighborhood is within the adaptive angular difference and eventually identifies the data points in the intersection area of manifolds. Data points whose inclusion in the neighborhood-identified data points increases their intrinsic dimensionality are removed based on data variance and distance. The proposed method performs better than 18 SOTA manifold segmentation methods in ARI and NMI scores over 14 real-world datasets with lower time complexity and better stability.<|reference_end|>
arxiv
@article{boral2024acev:, title={ACEV: Unsupervised Intersecting Manifold Segmentation using Adaptation to Angular Change of Eigenvectors in Intrinsic Dimension}, author={Subhadip Boral and Rikathi Pal and Ashish Ghosh}, journal={arXiv preprint arXiv:2410.00930}, year={2024}, archivePrefix={arXiv}, eprint={2410.00930}, primaryClass={cs.LG cs.AI cs.CG} }
boral2024acev:
arxiv-664189
2410.00933
StreamEnsemble: Predictive Queries over Spatiotemporal Streaming Data
<|reference_start|>StreamEnsemble: Predictive Queries over Spatiotemporal Streaming Data: Predictive queries over spatiotemporal (ST) stream data pose significant data processing and analysis challenges. ST data streams involve a set of time series whose data distributions may vary in space and time, exhibiting multiple distinct patterns. In this context, assuming a single machine learning model would adequately handle such variations is likely to lead to failure. To address this challenge, we propose StreamEnsemble, a novel approach to predictive queries over ST data that dynamically selects and allocates Machine Learning models according to the underlying time series distributions and model characteristics. Our experimental evaluation reveals that this method markedly outperforms traditional ensemble methods and single model approaches in terms of accuracy and time, demonstrating a more than tenfold reduction in prediction error compared to traditional approaches.<|reference_end|>
arxiv
@article{chaves2024streamensemble:, title={StreamEnsemble: Predictive Queries over Spatiotemporal Streaming Data}, author={Anderson Chaves, Eduardo Ogasawara, Patrick Valduriez, Fabio Porto}, journal={arXiv preprint arXiv:2410.00933}, year={2024}, archivePrefix={arXiv}, eprint={2410.00933}, primaryClass={stat.ML cs.AI cs.LG} }
chaves2024streamensemble:
arxiv-664190
2410.00938
MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards
<|reference_start|>MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards: The rapid scaling of large language models necessitates more lightweight finetuning methods to reduce the explosive GPU memory overhead when numerous customized models are served simultaneously. Targeting more parameter-efficient low-rank adaptation (LoRA), parameter sharing presents a promising solution. Empirically, our research into high-level sharing principles highlights the indispensable role of differentiation in reversing the detrimental effects of pure sharing. Guided by this finding, we propose Mixture of Shards (MoS), incorporating both inter-layer and intra-layer sharing schemes, and integrating four nearly cost-free differentiation strategies, namely subset selection, pair dissociation, vector sharding, and shard privatization. Briefly, it selects a designated number of shards from global pools with a Mixture-of-Experts (MoE)-like routing mechanism before sequentially concatenating them to low-rank matrices. Hence, it retains all the advantages of LoRA while offering enhanced parameter efficiency, and effectively circumvents the drawbacks of peer parameter-sharing methods. Our empirical experiments demonstrate approximately 8x parameter savings in a standard LoRA setting. The ablation study confirms the significance of each component. Our insights into parameter sharing and MoS method may illuminate future developments of more parameter-efficient finetuning methods.<|reference_end|>
arxiv
@article{wang2024mos:, title={MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards}, author={Sheng Wang, Liheng Chen, Pengan Chen, Jingwei Dong, Boyang Xue, Jiyue Jiang, Lingpeng Kong, Chuan Wu}, journal={arXiv preprint arXiv:2410.00938}, year={2024}, archivePrefix={arXiv}, eprint={2410.00938}, primaryClass={cs.LG} }
wang2024mos:
arxiv-664191
2410.00940
Automatic Speech Recognition for the Ika Language
<|reference_start|>Automatic Speech Recognition for the Ika Language: We present a cost-effective approach for developing Automatic Speech Recognition (ASR) models for low-resource languages like Ika. We fine-tune the pretrained wav2vec 2.0 Massively Multilingual Speech Models on a high-quality speech dataset compiled from New Testament Bible translations in Ika. Our results show that fine-tuning multilingual pretrained models achieves a Word Error Rate (WER) of 0.5377 and Character Error Rate (CER) of 0.2651 with just over 1 hour of training data. The larger 1 billion parameter model outperforms the smaller 300 million parameter model due to its greater complexity and ability to store richer speech representations. However, we observe overfitting to the small training dataset, reducing generalizability. Our findings demonstrate the potential of leveraging multilingual pretrained models for low-resource languages. Future work should focus on expanding the dataset and exploring techniques to mitigate overfitting.<|reference_end|>
arxiv
@article{nzenwata2024automatic, title={Automatic Speech Recognition for the Ika Language}, author={Uchenna Nzenwata, Daniel Ogbuigwe}, journal={arXiv preprint arXiv:2410.00940}, year={2024}, archivePrefix={arXiv}, eprint={2410.00940}, primaryClass={cs.CL} }
nzenwata2024automatic
arxiv-664192
2410.00942
AR-Sieve Bootstrap for the Random Forest and a simulation-based comparison with rangerts time series prediction
<|reference_start|>AR-Sieve Bootstrap for the Random Forest and a simulation-based comparison with rangerts time series prediction: The Random Forest (RF) algorithm can be applied to a broad spectrum of problems, including time series prediction. However, neither the classical IID (Independent and Identically distributed) bootstrap nor block bootstrapping strategies (as implemented in rangerts) completely account for the nature of the Data Generating Process (DGP) while resampling the observations. We propose the combination of RF with a residual bootstrapping technique where we replace the IID bootstrap with the AR-Sieve Bootstrap (ARSB), which assumes the DGP to be an autoregressive process. To assess the new model's predictive performance, we conduct a simulation study using synthetic data generated from different types of DGPs. It turns out that ARSB provides more variation amongst the trees in the forest. Moreover, RF with ARSB shows greater accuracy compared to RF with other bootstrap strategies. However, these improvements are achieved at some efficiency costs.<|reference_end|>
arxiv
@article{fokam2024ar-sieve, title={AR-Sieve Bootstrap for the Random Forest and a simulation-based comparison with rangerts time series prediction}, author={Cabrel Teguemne Fokam and Carsten Jentsch and Michel Lang and Markus Pauly}, journal={arXiv preprint arXiv:2410.00942}, year={2024}, archivePrefix={arXiv}, eprint={2410.00942}, primaryClass={stat.ML cs.LG} }
fokam2024ar-sieve
arxiv-664193
2410.00943
RisingBALLER: A player is a token, a match is a sentence, A path towards a foundational model for football players data analytics
<|reference_start|>RisingBALLER: A player is a token, a match is a sentence, A path towards a foundational model for football players data analytics: In this paper, I introduce RisingBALLER, the first publicly available approach that leverages a transformer model trained on football match data to learn match-specific player representations. Drawing inspiration from advances in language modeling, RisingBALLER treats each football match as a unique sequence in which players serve as tokens, with their embeddings shaped by the specific context of the match. Through the use of masked player prediction (MPP) as a pre-training task, RisingBALLER learns foundational features for football player representations, similar to how language models learn semantic features for text representations. As a downstream task, I introduce next match statistics prediction (NMSP) to showcase the effectiveness of the learned player embeddings. The NMSP model surpasses a strong baseline commonly used for performance forecasting within the community. Furthermore, I conduct an in-depth analysis to demonstrate how the learned embeddings by RisingBALLER can be used in various football analytics tasks, such as producing meaningful positional features that capture the essence and variety of player roles beyond rigid x,y coordinates, team cohesion estimation, and similar player retrieval for more effective data-driven scouting. More than a simple machine learning model, RisingBALLER is a comprehensive framework designed to transform football data analytics by learning high-level foundational features for players, taking into account the context of each match. It offers a deeper understanding of football players beyond individual statistics.<|reference_end|>
arxiv
@article{adjileye2024risingballer, title={RisingBALLER: A player is a token, a match is a sentence, A path towards a foundational model for football players data analytics}, author={Akedjou Achraff Adjileye}, journal={arXiv preprint arXiv:2410.00943}, year={2024}, archivePrefix={arXiv}, eprint={2410.00943}, primaryClass={cs.LG} }
adjileye2024risingballer
arxiv-664194
2410.00944
GAMMA-PD: Graph-based Analysis of Multi-Modal Motor Impairment Assessments in Parkinson's Disease
<|reference_start|>GAMMA-PD: Graph-based Analysis of Multi-Modal Motor Impairment Assessments in Parkinson's Disease: The rapid advancement of medical technology has led to an exponential increase in multi-modal medical data, including imaging, genomics, and electronic health records (EHRs). Graph neural networks (GNNs) have been widely used to represent this data due to their prominent performance in capturing pairwise relationships. However, the heterogeneity and complexity of multi-modal medical data still pose significant challenges for standard GNNs, which struggle with learning higher-order, non-pairwise relationships. This paper proposes GAMMA-PD (Graph-based Analysis of Multi-modal Motor Impairment Assessments in Parkinson's Disease), a novel heterogeneous hypergraph fusion framework for multi-modal clinical data analysis. GAMMA-PD integrates imaging and non-imaging data into a "hypernetwork" (patient population graph) by preserving higher-order information and similarity between patient profiles and symptom subtypes. We also design a feature-based attention-weighted mechanism to interpret feature-level contributions towards downstream decision tasks. We evaluate our approach with clinical data from the Parkinson's Progression Markers Initiative (PPMI) and a private dataset. We demonstrate gains in predicting motor impairment symptoms in Parkinson's disease. Our end-to-end framework also learns associations between subsets of patient characteristics to generate clinically relevant explanations for disease and symptom profiles. The source code is available at https://github.com/favour-nerrise/GAMMA-PD.<|reference_end|>
arxiv
@article{nerrise2024gamma-pd, title={GAMMA-PD: Graph-based Analysis of Multi-Modal Motor Impairment Assessments in Parkinson's Disease}, author={Favour Nerrise and Alice Louise Heiman and Ehsan Adeli}, journal={arXiv preprint arXiv:2410.00944}, year={2024}, archivePrefix={arXiv}, eprint={2410.00944}, primaryClass={q-bio.QM cs.AI cs.LG eess.IV q-bio.NC} }
nerrise2024gamma-pd
arxiv-664195
2410.00945
Evaluating Deep Regression Models for WSI-Based Gene-Expression Prediction
<|reference_start|>Evaluating Deep Regression Models for WSI-Based Gene-Expression Prediction: Prediction of mRNA gene-expression profiles directly from routine whole-slide images (WSIs) using deep learning models could potentially offer cost-effective and widely accessible molecular phenotyping. While such WSI-based gene-expression prediction models have recently emerged within computational pathology, the high-dimensional nature of the corresponding regression problem offers numerous design choices which remain to be analyzed in detail. This study provides recommendations on how deep regression models should be trained for WSI-based gene-expression prediction. For example, we conclude that training a single model to simultaneously regress all 20530 genes is a computationally efficient yet very strong baseline.<|reference_end|>
arxiv
@article{gustafsson2024evaluating, title={Evaluating Deep Regression Models for WSI-Based Gene-Expression Prediction}, author={Fredrik K. Gustafsson and Mattias Rantalainen}, journal={arXiv preprint arXiv:2410.00945}, year={2024}, archivePrefix={arXiv}, eprint={2410.00945}, primaryClass={q-bio.GN cs.CV cs.LG} }
gustafsson2024evaluating
arxiv-664196
2410.00946
Spectral Graph Sample Weighting for Interpretable Sub-cohort Analysis in Predictive Models for Neuroimaging
<|reference_start|>Spectral Graph Sample Weighting for Interpretable Sub-cohort Analysis in Predictive Models for Neuroimaging: Recent advancements in medicine have confirmed that brain disorders often comprise multiple subtypes of mechanisms, developmental trajectories, or severity levels. Such heterogeneity is often associated with demographic aspects (e.g., sex) or disease-related contributors (e.g., genetics). Thus, the predictive power of machine learning models used for symptom prediction varies across subjects based on such factors. To model this heterogeneity, one can assign each training sample a factor-dependent weight, which modulates the subject's contribution to the overall objective loss function. To this end, we propose to model the subject weights as a linear combination of the eigenbases of a spectral population graph that captures the similarity of factors across subjects. In doing so, the learned weights smoothly vary across the graph, highlighting sub-cohorts with high and low predictability. Our proposed sample weighting scheme is evaluated on two tasks. First, we predict initiation of heavy alcohol drinking in young adulthood from imaging and neuropsychological measures from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA). Next, we detect Dementia vs. Mild Cognitive Impairment (MCI) using imaging and demographic measurements in subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Compared to existing sample weighting schemes, our sample weights improve interpretability and highlight sub-cohorts with distinct characteristics and varying model accuracy.<|reference_end|>
arxiv
@article{paschali2024spectral, title={Spectral Graph Sample Weighting for Interpretable Sub-cohort Analysis in Predictive Models for Neuroimaging}, author={Magdalini Paschali and Yu Hang Jiang and Spencer Siegel and Camila Gonzalez and Kilian M. Pohl and Akshay Chaudhari and Qingyu Zhao}, journal={arXiv preprint arXiv:2410.00946}, year={2024}, archivePrefix={arXiv}, eprint={2410.00946}, primaryClass={eess.IV cs.LG} }
paschali2024spectral
arxiv-664197
2410.00948
Compressing Recurrent Neural Networks for FPGA-accelerated Implementation in Fluorescence Lifetime Imaging
<|reference_start|>Compressing Recurrent Neural Networks for FPGA-accelerated Implementation in Fluorescence Lifetime Imaging: Fluorescence lifetime imaging (FLI) is an important technique for studying cellular environments and molecular interactions, but its real-time application is limited by slow data acquisition, which requires capturing large time-resolved images and complex post-processing using iterative fitting algorithms. Deep learning (DL) models enable real-time inference, but can be computationally demanding due to complex architectures and large matrix operations. This makes DL models ill-suited for direct implementation on field-programmable gate array (FPGA)-based camera hardware. Model compression is thus crucial for practical deployment for real-time inference generation. In this work, we focus on compressing recurrent neural networks (RNNs), which are well-suited for FLI time-series data processing, to enable deployment on resource-constrained FPGA boards. We perform an empirical evaluation of various compression techniques, including weight reduction, knowledge distillation (KD), post-training quantization (PTQ), and quantization-aware training (QAT), to reduce model size and computational load while preserving inference accuracy. Our compressed RNN model, Seq2SeqLite, achieves a balance between computational efficiency and prediction accuracy, particularly at 8-bit precision. By applying KD, the model parameter size was reduced by 98% while retaining performance, making it suitable for concurrent real-time FLI analysis on FPGA during data capture. This work represents a big step towards integrating hardware-accelerated real-time FLI analysis for fast biological processes.<|reference_end|>
arxiv
@article{erbas2024compressing, title={Compressing Recurrent Neural Networks for FPGA-accelerated Implementation in Fluorescence Lifetime Imaging}, author={Ismail Erbas and Vikas Pandey and Aporva Amarnath and Naigang Wang and Karthik Swaminathan and Stefan T. Radev and Xavier Intes}, journal={arXiv preprint arXiv:2410.00948}, year={2024}, archivePrefix={arXiv}, eprint={2410.00948}, primaryClass={eess.IV cs.LG q-bio.QM} }
erbas2024compressing
arxiv-664198
2410.00976
Learning Chaotic Dynamics with Embedded Dissipativity
<|reference_start|>Learning Chaotic Dynamics with Embedded Dissipativity: Chaotic dynamics, commonly seen in weather systems and fluid turbulence, are characterized by their sensitivity to initial conditions, which makes accurate prediction challenging. Despite their sensitivity to initial perturbations, many chaotic systems exhibit dissipative behavior and ergodicity. Therefore, various approaches have recently been proposed to develop data-driven models preserving invariant statistics over long horizons. Although these methods have shown empirical success in reducing instances of unbounded trajectory generation, many of the models are still prone to generating unbounded trajectories, leading to invalid statistics evaluation. In this paper, we propose a novel neural network architecture that simultaneously learns a dissipative dynamics emulator that is guaranteed to generate bounded trajectories and an energy-like function that governs the dissipative behavior. More specifically, by leveraging control-theoretic ideas, we derive algebraic conditions based on the learned energy-like function that ensure asymptotic convergence to an invariant level set. Using these algebraic conditions, our proposed model enforces dissipativity through a ReLU projection layer, which provides formal trajectory boundedness guarantees. Furthermore, the invariant level set provides an outer estimate for the strange attractor, which is known to be very difficult to characterize due to its complex geometry. We demonstrate the capability of our model in producing bounded long-horizon trajectory forecasts and characterizing the attractor for chaotic dynamical systems including Lorenz 96 and a truncated Kuramoto-Sivashinsky equation.<|reference_end|>
arxiv
@article{tang2024learning, title={Learning Chaotic Dynamics with Embedded Dissipativity}, author={Sunbochen Tang and Themistoklis Sapsis and Navid Azizan}, journal={arXiv preprint arXiv:2410.00976}, year={2024}, archivePrefix={arXiv}, eprint={2410.00976}, primaryClass={eess.SY cs.SY} }
tang2024learning
arxiv-664199
2410.00978
Uncovering the Viral Nature of Toxicity in Competitive Online Video Games
<|reference_start|>Uncovering the Viral Nature of Toxicity in Competitive Online Video Games: Toxicity is a widespread phenomenon in competitive online video games. In addition to its direct undesirable effects, there is a concern that toxicity can spread to others, amplifying the harm caused by a single player's misbehavior. In this study, we estimate whether and to what extent a player's toxic speech spreads, causing their teammates to behave similarly. To this end, we analyze proprietary data from the free-to-play first-person action game Call of Duty: Warzone. We formulate and implement an instrumental variable identification strategy that leverages the network of interactions among players across matches. Our analysis reveals that all else equal, all of a player's teammates engaging in toxic speech increases their probability of engaging in similar behavior by 26.1 to 30.3 times the average player's likelihood of engaging in toxic speech. These findings confirm the viral nature of toxicity, especially toxic speech, in competitive online video games.<|reference_end|>
arxiv
@article{morrier2024uncovering, title={Uncovering the Viral Nature of Toxicity in Competitive Online Video Games}, author={Jacob Morrier and Amine Mahmassani and R. Michael Alvarez}, journal={arXiv preprint arXiv:2410.00978}, year={2024}, archivePrefix={arXiv}, eprint={2410.00978}, primaryClass={cs.CY cs.HC econ.GN q-fin.EC} }
morrier2024uncovering
arxiv-664200
2410.00979
Towards Full-parameter and Parameter-efficient Self-learning For Endoscopic Camera Depth Estimation
<|reference_start|>Towards Full-parameter and Parameter-efficient Self-learning For Endoscopic Camera Depth Estimation: Adaptation methods have recently been developed to adapt depth foundation models to endoscopic depth estimation. However, such approaches typically underperform full training since they limit the parameter search to a low-rank subspace and alter the training dynamics. Therefore, we propose a full-parameter and parameter-efficient learning framework for endoscopic depth estimation. In the first stage, the subspaces of attention, convolution, and multi-layer perceptron modules are adapted simultaneously. In the second stage, a memory-efficient optimization is proposed for subspace composition and the performance is further improved in the united subspace. Initial experiments on the SCARED dataset demonstrate that results at the first stage improve the performance from 10.2% to 4.1% for Sq Rel, Abs Rel, RMSE and RMSE log in comparison with the state-of-the-art models.<|reference_end|>
arxiv
@article{zhao2024towards, title={Towards Full-parameter and Parameter-efficient Self-learning For Endoscopic Camera Depth Estimation}, author={Shuting Zhao and Chenkang Du and Kristin Qi and Xinrong Chen and Xinhan Di}, journal={arXiv preprint arXiv:2410.00979}, year={2024}, archivePrefix={arXiv}, eprint={2410.00979}, primaryClass={cs.CV cs.AI} }
zhao2024towards